How to properly configure my Terraform folders for AWS deployment?

This is what my folder structure looks like:
total 248
drwxrwxr-x 6 miki miki 4096 Mar 7 16:01 ./
drwxrwxr-x 5 miki miki 4096 Mar 3 14:53 ../
-rw-rw-r-- 1 miki miki 460 Mar 4 11:59 application_01.tf
drwxrwxr-x 3 miki miki 4096 Mar 8 10:54 application-server/
-rw-rw-r-- 1 miki miki 862 Mar 4 09:06 ecr.tf
-rw-rw-r-- 1 miki miki 3169 Mar 4 11:36 iam.tf
-rw-rw-r-- 1 miki miki 1023 Mar 4 14:11 jenkins_01.tf
drwxrwxr-x 2 miki miki 4096 Mar 7 15:33 jenkins-config/
-rw------- 1 miki miki 3401 Mar 3 09:41 jenkins.key
-r-------- 1 miki miki 753 Mar 3 09:41 jenkins.pem
drwxrwxr-x 3 miki miki 4096 Mar 8 10:53 jenkins-server/
Yesterday I ran both terraform init and terraform apply.
I found out that the contents of my application-server folder were not applied.
I have a script file (update, install Docker, log in to ECR, and pull the image):
sudo yum update -y
sudo amazon-linux-extras install docker
sudo systemctl start docker
sudo systemctl enable docker
/bin/sh -e -c 'echo $(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin ${repository_url}'
sudo docker pull ${repository_url}:release
sudo docker run -p 80:8000 ${repository_url}:release
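When a user-data script like this appears not to run, the cloud-init log on the instance is usually the first place to look; a debugging sketch, to be run on the instance itself (note that cloud-init only executes user data that begins with a `#!` line, and only on first boot):

```shell
# On the EC2 instance: see whether cloud-init ran the script and what it printed
sudo cat /var/log/cloud-init-output.log
# Inspect the user data the instance actually received
curl -s http://169.254.169.254/latest/user-data
```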
Anyway, I checked the instance from the console.
I ran
terraform plan
and this is what it says:
No changes. Your infrastructure matches the configuration.
Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan:
terraform apply -refresh-only
My application_01.tf file:
module "application-server" {
source = "./application-server"
ami-id = "ami-0742b4e673072066f" # AMI for an Amazon Linux instance for region: us-east-1
iam-instance-profile = aws_iam_instance_profile.simple-web-app.id
key-pair = aws_key_pair.simple-web-app-key.key_name
name = "Simple Web App"
device-index = 0
network-interface-id = aws_network_interface.simple-web-app.id
repository-url = aws_ecr_repository.simple-web-app.repository_url
}
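For every argument passed in the module block above, the child module needs a matching variable declaration; a minimal sketch of what application-server_variables.tf would have to contain (types and the default are illustrative assumptions, not the asker's actual file):

```hcl
variable "ami-id" { type = string }
variable "iam-instance-profile" { type = string }
variable "key-pair" { type = string }
variable "name" { type = string }
variable "device-index" { type = number }
variable "network-interface-id" { type = string }
variable "repository-url" { type = string }

variable "instance-type" {
  type    = string
  default = "t2.micro" # illustrative: main.tf references var.instance-type,
                       # which the module call does not set, so it needs a default
}
```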
And the application-server folder:
-rw-rw-r-- 1 miki miki 417 Mar 2 11:18 application-server_main.tf
-rw-rw-r-- 1 miki miki 164 Mar 2 11:21 application-server_output.tf
-rw-rw-r-- 1 miki miki 398 Mar 2 11:17 application-server_variables.tf
drwxr-xr-x 3 miki miki 4096 Mar 8 10:54 .terraform/
-rw-r--r-- 1 miki miki 1076 Mar 8 10:54 .terraform.lock.hcl
-rw-rw-r-- 1 miki miki 866 Mar 4 14:39 user_data.sh
And application-server_main.tf
resource "aws_instance" "default" {
ami = var.ami-id
iam_instance_profile = var.iam-instance-profile
instance_type = var.instance-type
key_name = var.key-pair
network_interface {
device_index = var.device-index
network_interface_id = var.network-interface-id
}
user_data = templatefile("${path.module}/user_data.sh", {repository_url = var.repository-url})
tags = {
Name = var.name
}
}
My script is not executed. Why? And how do I properly structure Terraform across multiple folders?

Related

How can I SSH to my GCP Kubernetes cluster?

I created the cluster in the dashboard. From Cloud Shell, the .ssh folder:
jhomes369#cloudshell:~/.ssh (leafy-garden-359409)$ ls -la
total 20
drwx------ 2 jhomes369 jhomes369 4096 Oct 4 11:00 .
drwxr-xr-x 7 jhomes369 jhomes369 4096 Oct 3 10:11 ..
-rw------- 1 jhomes369 jhomes369 0 Oct 4 11:00 admin-cluster.key
-rw------- 1 jhomes369 jhomes369 2643 Sep 20 10:18 google_compute_engine
-rw-r--r-- 1 jhomes369 jhomes369 597 Sep 20 10:18 google_compute_engine.pub
-rw-r--r-- 1 jhomes369 jhomes369 189 Sep 20 10:18 google_compute_known_hosts
-rw------- 1 jhomes369 jhomes369 0 Oct 4 10:55 mgmt-cluster-2.key
How do I connect to the GKE cluster from my Ubuntu laptop?
You can connect to the GKE cluster from your local Ubuntu machine with a gcloud command:
gcloud container clusters get-credentials your-gke --region europe-west1 --project your-project
You need to use an authorized identity from your shell session (a Google user or a service account).

GCP Cloud Build: react-scripts build doesn't find env file

I'm doing something I thought was simple:
# Fetch config
- name: 'gcr.io/cloud-builders/gsutil'
volumes:
- name: 'vol1'
path: '/persistent_volume'
args: [ 'cp', 'gs://servicesconfig/devs/react-app/env.server', '/persistent_volume/env.server' ]
# Install dependencies
- name: node:$_NODE_VERSION
entrypoint: 'yarn'
args: [ 'install' ]
# Build project
- name: node:$_NODE_VERSION
volumes:
- name: 'vol1'
path: '/persistent_volume'
entrypoint: 'bash'
args:
- -c
- |
cp /persistent_volume/env.server .env.production &&
cat .env.production &&
ls -la &&
yarn run build:prod
while in my package.json:
"build:prod": "sh -ac '. .env.production; react-scripts build'",
All of this works well locally, but this is the output in GCP Cloud Build:
Already have image: node:14
REACT_APP_ENV="sandbox"
REACT_APP_CAPTCHA_ENABLED=true
REACT_APP_CAPTCHA_PUBLIC_KEY="akey"
REACT_APP_DEFAULT_APP="home-btn"
REACT_APP_API_URL="akey2"
REACT_APP_STRIPE_KEY="akey3"
REACT_APP_COGNITO_POOL_ID="akey4"
REACT_APP_COGNITO_APP_ID="akey5"
total 2100
drwxr-xr-x 6 root root 4096 Feb 25 12:15 .
drwxr-xr-x 1 root root 4096 Feb 25 12:15 ..
-rw-r--r-- 1 root root 382 Feb 25 12:15 .env.production <- it's here!
drwxr-xr-x 8 root root 4096 Feb 25 12:13 .git
-rw-r--r-- 1 root root 230 Feb 25 12:13 .gitignore
-rw-r--r-- 1 root root 371 Feb 25 12:13 Dockerfile
-rw-r--r-- 1 root root 3787 Feb 25 12:13 README.md
-rw-r--r-- 1 root root 1019 Feb 25 12:13 cloudbuild.yaml
drwxr-xr-x 1089 root root 36864 Feb 25 12:14 node_modules
-rw-r--r-- 1 root root 1580131 Feb 25 12:13 package-lock.json
-rw-r--r-- 1 root root 1896 Feb 25 12:13 package.json
drwxr-xr-x 2 root root 4096 Feb 25 12:13 public
drwxr-xr-x 9 root root 4096 Feb 25 12:13 src
-rw-r--r-- 1 root root 535 Feb 25 12:13 tsconfig.json
-rw-r--r-- 1 root root 478836 Feb 25 12:13 yarn.lock
/workspace
yarn run v1.22.17
$ sh -ac '. .env.production; react-scripts build'
sh: 1: .: .env.production: not found
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm unsure if I'm doing something completely wrong or if it's a bug on GCP's side.
Alright, I'm not expert enough in the bash and sh documentation to understand what the issue is, but I ended up solving it.
One thing to pay attention to:
everything is actually shared between steps in Cloud Build; there is no need for a volume or any specific path.
So on the cloudbuild side I changed the yaml to reflect:
- name: node:$_NODE_VERSION
entrypoint: 'bash'
args:
- -c
- |
mv env.server .env.production &&
yarn run build:prod
And in package.json I'm now using an extra library, env-cmd,
which changes the build command to:
"build:prod": "env-cmd -f .env.production react-scripts build",
This works like a charm.
I'm a bit annoyed I had to add another library for this but, well.
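For what it's worth, one likely explanation for the original `sh: 1: .: .env.production: not found` error: under strict POSIX sh (dash, which is /bin/sh in Debian-based node images), a bare filename after `.` is looked up on PATH rather than in the current directory, while bash falls back to the current directory. Prefixing the file with `./` sidesteps this without an extra library; a minimal reproduction sketch:

```shell
# Create a throwaway directory with an env file
dir=$(mktemp -d)
cd "$dir"
echo 'export FOO=bar' > .env.production

# A bare name after `.` is searched on PATH under strict POSIX sh,
# so the explicit ./ path is the portable form:
sh -c '. ./.env.production; echo "$FOO"'   # prints: bar
```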

Terraform AWS provider issue

Can anyone suggest how to fix this issue?
Initializing provider plugins...
terraform.io/builtin/terraform is built in to Terraform
Finding hashicorp/aws versions matching "~> 3.56.0"...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/aws: provider registry.terraform.io/hashicorp/aws was not found
│ in any of the search locations
│
│ - /var/terraform/terraform-provider-aws-3.56.0/
drwxr-xr-x. 1 root root 4096 Oct 15 03:21 terraform-provider-aws-3.56.0
[root@7c3369092d09 terraform]# pwd
/var/terraform
[root@7c3369092d09 terraform]#
[root@7c3369092d09 terraform]# cd terraform-provider-aws-3.56.0/
[root@7c3369092d09 terraform-provider-aws-3.56.0]# ll
total 572
drwxr-xr-x. 5 root root 151552 Aug 26 22:30 aws
drwxr-xr-x. 5 root root 4096 Aug 26 22:30 awsproviderlint
-rw-r--r--. 1 root root 275106 Aug 26 22:30 CHANGELOG.md
drwxr-xr-x. 4 root root 4096 Aug 26 22:30 docs
drwxr-xr-x. 27 root root 4096 Aug 26 22:30 examples
-rw-r--r--. 1 root root 6140 Aug 26 22:30 GNUmakefile
-rw-r--r--. 1 root root 1137 Aug 26 22:30 go.mod
-rw-r--r--. 1 root root 62213 Aug 26 22:30 go.sum
drwxr-xr-x. 3 root root 4096 Aug 26 22:30 infrastructure
-rw-r--r--. 1 root root 16725 Aug 26 22:30 LICENSE
-rw-r--r--. 1 root root 580 Aug 26 22:30 main.go
-rw-r--r--. 1 root root 2067 Aug 26 22:30 README.md
-rw-r--r--. 1 root root 7105 Aug 26 22:30 ROADMAP.md
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 scripts
-rw-r--r--. 1 root root 86 Aug 26 22:30 staticcheck.conf
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 tools
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 version
drwxr-xr-x. 3 root root 4096 Aug 26 22:30 website
[root@7c3369092d09 terraform-provider-aws-3.56.0]#
I am creating a Docker image/container to install Terraform and run operations with it.
Dockerfile snippet (it downloads the provider archive, named terraform-provider-aws_3.56.0_linux_amd64, from our internal repo):
RUN wget $FILES_REPO_LOCAL/sd.svtstand.files/distributive/terraform/providers/terraform-provider-aws_${PROVIDER_VERSION}_linux_amd64.zip \
&& unzip terraform-provider-aws_*.zip -d /tmp \
&& rm terraform-provider-aws_*.zip \
&& mkdir -p /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64 \
&& mv /tmp/terraform-provider-aws_* /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64 \
&& chmod 775 /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64
This initializes Terraform:
terraform init -plugin-dir=/opt/terraform/plugins
#Provider block
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">=3.35.0"
}
}
}
This is the output:
2021-10-18T13:47:01.766Z [DEBUG] will search for provider plugins in [/opt/terraform/plugins]
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding hashicorp/aws versions matching "~> 3.62.0"...
- Installing hashicorp/aws v3.62.0...
Output in the container:
[root@bf34ab9c5277 aws]# pwd
/opt/terraform/plugins/registry.terraform.io/hashicorp/aws
[root@bf34ab9c5277 aws]# ll
total 0
drwxr-xr-x 3 root root 25 Oct 18 12:59 3.62.0
[root@bf34ab9c5277 aws]#
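As an alternative to passing -plugin-dir on every init, a filesystem_mirror block in the Terraform CLI configuration (~/.terraformrc, or the file named by TF_CLI_CONFIG_FILE) can point Terraform at locally unpacked providers permanently; a sketch assuming the layout built by the Dockerfile above:

```hcl
# ~/.terraformrc
provider_installation {
  filesystem_mirror {
    path    = "/opt/terraform/plugins"
    include = ["registry.terraform.io/hashicorp/aws"]
  }
  # Fall back to the real registry for everything else
  direct {
    exclude = ["registry.terraform.io/hashicorp/aws"]
  }
}
```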

aws cli command not working when installed to a local folder

I installed the aws-cli program as described in the docs, without sudo, to a local folder. But when I try to run the command-line utility I get an error.
$ ./aws/install -i awscli-app -b awscli-bin
$ ls awscli-bin/
aws aws_completer
$ ls -alh awscli-bin
total 8.0K
drwxrwxr-x 2 prod prod 4.0K Jun 22 14:14 .
drwxr-xr-x 16 prod prod 4.0K Jun 22 14:14 ..
lrwxrwxrwx 1 prod prod 29 Jun 22 14:14 aws -> awscli-app/v2/current/bin/aws
lrwxrwxrwx 1 prod prod 39 Jun 22 14:14 aws_completer -> awscli-app/v2/current/bin/aws_completer
$ ./awscli-bin/aws --version
-bash: ./awscli-bin/aws: No such file or directory
What am I missing here? Can anyone help me?
EDIT
$ ls -alh awscli-app/v2/current
lrwxrwxrwx 1 prod prod 20 Jun 22 14:14 awscli-app/v2/current -> awscli-app/v2/2.0.24
$ ls -alh awscli-app/v2/2.0.24
total 16K
drwxrwxr-x 4 prod prod 4.0K Jun 22 14:14 .
drwxrwxr-x 3 prod prod 4.0K Jun 22 14:14 ..
drwxrwxr-x 2 prod prod 4.0K Jun 22 14:14 bin
drwxr-xr-x 11 prod prod 4.0K Jun 22 14:14 dist
EDIT2
$ ./awscli-app/v2/2.0.24/bin/aws --version
aws-cli/2.0.24 Python/3.7.3 Linux/3.13.0-63-generic botocore/2.0.0dev28
$ ls -alh awscli-app/v2/2.0.24/bin
total 8.0K
drwxrwxr-x 2 prod prod 4.0K Jun 22 14:14 .
drwxrwxr-x 4 prod prod 4.0K Jun 22 14:14 ..
lrwxrwxrwx 1 prod prod 11 Jun 22 14:14 aws -> ../dist/aws
lrwxrwxrwx 1 prod prod 21 Jun 22 14:14 aws_completer -> ../dist/aws_completer
TL;DR: Using relative paths instead of full paths seemed to be the root cause (I didn't dig into the installer to determine why).
Relative path attempts:
$ ./aws/install --bin-dir aws-bin --install-dir aws-cli
$ ls -al aws-bin
total 8
drwxr-xr-x 2 luser luser 4096 Oct 26 02:05 .
drwxrwxrwx 20 root root 4096 Oct 26 02:05 ..
lrwxrwxrwx 1 luser luser 26 Oct 26 02:05 aws -> aws-cli/v2/current/bin/aws
lrwxrwxrwx 1 luser luser 36 Oct 26 02:05 aws_completer -> aws-cli/v2/current/bin/aws_completer
$ ls -al aws-cli/v2/
total 12
drwxr-xr-x 3 luser luser 4096 Oct 26 02:05 .
drwxr-xr-x 3 luser luser 4096 Oct 26 02:05 ..
drwxr-xr-x 4 luser luser 4096 Oct 26 02:05 2.8.5
lrwxrwxrwx 1 luser luser 16 Oct 26 02:05 current -> aws-cli/v2/2.8.5
$ ./aws/install --bin-dir ./aws-bin --install-dir ./aws-cli
lrwxrwxrwx 1 luser luser 26 Oct 26 02:07 aws -> ./aws-cli/v2/current/bin/aws
<...>
absolute paths:
$ CURRENT_DIR=$( pwd )
$ ./aws/install --bin-dir ${CURRENT_DIR}/aws-bin --install-dir ${CURRENT_DIR}/aws-cli
$ ls -al aws-bin
total 16
drwxr-xr-x 2 luser luser 4096 Oct 26 02:14 .
drwxrwxrwx 20 root root 4096 Oct 26 02:14 ..
lrwxrwxrwx 1 luser luser 77 Oct 26 02:14 aws -> ${CURRENT_DIR}/aws-cli/v2/current/bin/aws
lrwxrwxrwx 1 luser luser 87 Oct 26 02:14 aws_completer -> ${CURRENT_DIR}/aws-cli/v2/current/bin/aws_completer
$ ls -al aws-cli/v2/
total 16
drwxr-xr-x 3 luser luser 4096 Oct 26 02:14 .
drwxr-xr-x 3 luser luser 4096 Oct 26 02:14 ..
drwxr-xr-x 4 luser luser 4096 Oct 26 02:14 2.8.5
lrwxrwxrwx 1 luser luser 67 Oct 26 02:14 current -> ${CURRENT_DIR}/aws-cli/v2/2.8.5
Using relative paths makes relative symlinks, and that's a no-go. Hopefully this helps someone in future years :)
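The failure mode is easy to reproduce outside the installer: a symlink whose target is a relative path is resolved relative to the directory containing the link, not the directory you ran ln from. A minimal sketch (all paths are throwaway):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/aws-cli/v2/current/bin" "$demo/aws-bin"
printf '#!/bin/sh\necho ok\n' > "$demo/aws-cli/v2/current/bin/aws"
chmod +x "$demo/aws-cli/v2/current/bin/aws"

# Relative target: resolved against aws-bin/, which has no aws-cli/ inside -> dangling
ln -s aws-cli/v2/current/bin/aws "$demo/aws-bin/aws-rel"
# Absolute target: resolves from anywhere
ln -s "$demo/aws-cli/v2/current/bin/aws" "$demo/aws-bin/aws-abs"

[ -e "$demo/aws-bin/aws-rel" ] || echo "relative link dangles"
"$demo/aws-bin/aws-abs"   # prints: ok
```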

AWS CodeDeploy script exited with code 127

This is my first time using AWS CodeDeploy and I'm having problems creating my appspec.yml file.
This is the error I'm getting:
2019-02-16 19:28:06 ERROR [codedeploy-agent(3596)]:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Error during perform:
InstanceAgent::Plugins::CodeDeployPlugin::ScriptError -
Script at specified location: deploy_scripts/install_project_dependencies
run as user root failed with exit code 127 -
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:183:in `execute_script'
This is my appspec.yml file
version: 0.0
os: linux
files:
- source: /
destination: /var/www/html/admin_panel_backend
hooks:
BeforeInstall:
- location: deploy_scripts/install_dependencies
timeout: 300
runas: root
- location: deploy_scripts/start_server
timeout: 300
runas: root
AfterInstall:
- location: deploy_scripts/install_project_dependencies
timeout: 300
runas: root
ApplicationStop:
- location: deploy_scripts/stop_server
timeout: 300
runas: root
And this is my project structure
drwxr-xr-x 7 501 20 224 Feb 6 20:57 api
-rw-r--r-- 1 501 20 501 Feb 16 16:29 appspec.yml
-rw-r--r-- 1 501 20 487 Feb 14 21:54 bitbucket-pipelines.yml
-rw-r--r-- 1 501 20 3716 Feb 14 20:43 codedeploy_deploy.py
drwxr-xr-x 4 501 20 128 Feb 6 20:57 config
-rw-r--r-- 1 501 20 1047 Feb 4 22:56 config.yml
drwxr-xr-x 6 501 20 192 Feb 16 16:25 deploy_scripts
drwxr-xr-x 264 501 20 8448 Feb 6 17:40 node_modules
-rw-r--r-- 1 501 20 101215 Feb 6 20:57 package-lock.json
-rw-r--r-- 1 501 20 580 Feb 6 20:57 package.json
-rw-r--r-- 1 501 20 506 Feb 4 08:50 server.js
And deploy_scripts folder
-rwxr--r-- 1 501 20 50 Feb 14 22:54 install_dependencies
-rwxr--r-- 1 501 20 61 Feb 16 16:25 install_project_dependencies
-rwxr--r-- 1 501 20 32 Feb 14 22:44 start_server
-rwxr--r-- 1 501 20 31 Feb 14 22:44 stop_server
This is my install_project_dependencies script
#!/bin/bash
cd /var/www/html/admin_panel_backend
npm install
All the other scripts are working fine, but this one (install_project_dependencies) is not.
Thank you all!
After reading a lot, I realized I was having the same problem as NPM issue deploying a nodejs instance using AWS codedeploy: I didn't have my PATH variable set.
So leaving my script as this worked fine:
#!/bin/bash
source /root/.bash_profile
cd /var/www/html/admin_panel_backend
npm install
Thanks!
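For context, exit code 127 is the shell's "command not found" status, which is why a missing PATH entry for npm under root produces exactly the CodeDeploy error above. A quick illustration:

```shell
# Running a nonexistent command yields status 127 -- the code CodeDeploy reported
sh -c 'some-command-that-does-not-exist' 2>/dev/null
echo "exit: $?"   # prints: exit: 127
```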
I had the exact same problem because npm was installed for EC2-user and not for root. I solved it by adding this line to my install_dependencies script.
su - ec2-user -c 'cd /usr/local/nginx/html/node && npm install'
You can replace your npm install line with the line above to install as your user.