Gcloud CoreOs Cloud Config not having effect - google-cloud-platform

I am trying to create a CoreOS instance on Google Cloud and it seems to be ignoring my cloud-config.
Here is my terminal command for setting up the gcloud CoreOS instance:
gcloud compute instances create gfb-core-1 --zone europe-west1-b --machine-type n1-standard-1 --metadata-from-file user-data=conductor/coreos/cloud-config-gcloud.ym
I have below a sample of my cloud config.
#cloud-config
coreos:
  units:
    - name: sample.service
      command: start
      enable: true
      content: |
        [Unit]
        Description=Sample Service.
        After=docker.service
        Requires=docker.service
        [Service]
        TimeoutStartSec=0
        EnvironmentFile=/etc/environment
        ExecStart=/opt/bin/docker-compose start;
        ExecStop=/opt/bin/docker-compose stop;
        [Install]
        WantedBy=multi-user.target
    - name: backup.service
      enable: true
      content: |
        [Unit]
        Description=Sample BackUp Script
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/docker exec db-live /backup-db.sh
    - name: backup.timer
      command: start
      enable: true
      content: |
        [Unit]
        Description=Runs Sample BackUp twice a day
        [Timer]
        OnCalendar=*-*-* 0/12:00:00
        # References for timers: https://www.freedesktop.org/software/systemd/man/systemd.time.html
    - name: media-backup.mount
      command: start
      enable: true
      content: |
        [Mount]
        What=/dev/disk/by-id/google-core-disk-1
        Where=/app
        Type=ext3
write_files:
  - path: /etc/environment
    permissions: 420
    content: |
      COMPOSE_FILE=/path/to/app/docker-compose.yml
  - path: /home/core/.bashrc
    permissions: 420
    owner: core:core
    content: |
      # source <(sudo cat /etc/environment)
      eval $(sudo cat /etc/environment | sed 's/^/export /')

Cloud-configs use indentation for structure/hierarchy, and the file you shared is indented incorrectly. Was that a typo when sharing, or is it actually like that?
Try out https://coreos.com/validate/ to find if your config is valid or not.
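As a rough illustration of what the validator catches, here is a minimal sketch (a hypothetical helper, not part of any CoreOS tooling) that flags YAML mapping keys whose children are not indented more deeply, which is exactly the symptom of a flattened cloud-config:

```python
def indent(line):
    """Number of leading spaces on a line."""
    return len(line) - len(line.lstrip(" "))

def flag_flat_keys(lines):
    """Flag mapping keys (lines ending in ':') whose next line is not
    indented more deeply -- the symptom of a flattened cloud-config."""
    problems = []
    for prev, cur in zip(lines, lines[1:]):
        if prev.rstrip().endswith(":") and indent(cur) <= indent(prev):
            problems.append(prev.strip())
    return problems

# The flattened config from the question trips the check immediately:
flat = ["coreos:", "units:", "- name: sample.service"]
print(flag_flat_keys(flat))  # ['coreos:', 'units:']
```

A properly indented file (children nested under their parent keys) passes this check, and would be accepted by the online validator as well.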

Related

Using worker identity with gcloud

I have 2 Docker images with the gcloud SDK, and my entrypoint script performs some checks using gcloud, like the following:
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
echo "Subscription not found, exited with non-zero status $result"
exit $result
fi
I am running these in gke...
I have a different GCP Service Account for each docker image which is connected to GKE Service Account using workload-identity.
My problem is that both deployments don't succeed at the same time: the one which runs first succeeds and the other fails with the following error, which seems to be something to do with the GKE/GCP credentials.
I get the following error:
gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials.
If you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
Even if I make the following changes, I can't get it to work:
gcloud config set account sa@project.iam.gserviceaccount.com
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
echo "Subscription not found, exited with non-zero status $result"
exit $result
fi
The error I get now:
gcloud config set account sa@project.iam.gserviceaccount.com
Updated property [core/account].
+ gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) Your current active account [sa@project.iam.gserviceaccount.com] does not have any valid credentials
Please run:
$ gcloud auth login
to obtain new credentials.
For service account, please activate it first:
$ gcloud auth activate-service-account ACCOUNT
I don't want to use the GCP client libraries as I want to keep it lightweight, so either gcloud or curl is the best option.
Can I use gcloud in GKE without the key file?
Can I call googleapis via curl without passing bearer token or how shall I get that in the docker container?
Any ideas... Thanks...
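One avenue for the curl question above: with Workload Identity configured, a pod can usually fetch an access token from the GKE metadata server instead of using a key file. A sketch, assuming the standard GCP metadata-server endpoint and header (verify against your cluster before relying on it):

```python
import urllib.request

# Standard GCP metadata-server endpoint for the pod's service-account
# token (assumption: reachable from inside a GKE pod where Workload
# Identity is configured).
TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
             "instance/service-accounts/default/token")

def build_token_request():
    # The Metadata-Flavor header is mandatory; without it the metadata
    # server rejects the request.
    return urllib.request.Request(TOKEN_URL,
                                  headers={"Metadata-Flavor": "Google"})

req = build_token_request()
print(req.full_url)
```

The returned access token can then be sent as a Bearer token to the googleapis REST endpoints with curl, avoiding both the key file and the client libraries.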
Note#1: workload identity
resource "google_service_account_iam_member" "workload_identity_iam" {
  member             = "serviceAccount:${var.gcp_project}.svc.id.goog[${var.kubernetes_namespace}/${var.kubernetes_service_account_name}]"
  role               = "roles/iam.workloadIdentityUser"
  service_account_id = google_service_account.sa.name
  depends_on         = [google_project_iam_member.pubsub_subscriber_iam, google_project_iam_member.bucket_object_admin_iam]
}
Note#2: GKE SAs
Name: sa1
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa1@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa1-token-shj9w
Tokens: sa1-token-shj9w
Events: <none>
Name: sa2
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa2@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa2-token-dkhdl
Tokens: sa2-token-dkhdl
Events: <none>
Note#3: job template for container
apiVersion: batch/v1
kind: Job
metadata:
  namespace: some-namespace
  name: check
  labels:
    helm.sh/chart: check-0.1.0
    app.kubernetes.io/name: check
    app.kubernetes.io/instance: check
    app: check
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-weight: "-4"
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    metadata:
      name: check
      labels:
        app.kubernetes.io/name: check
        app.kubernetes.io/instance: check
        app: check
    spec:
      restartPolicy: Never
      terminationGracePeriodSeconds: 0
      serviceAccountName: sa1
      securityContext: {}
      containers:
        - name: check
          securityContext: {}
          image: "eu.gcr.io/some-project/check:500c4166"
          imagePullPolicy: Always
          env:
            # Define the environment variable
            - name: GCP_PROJECT_ID
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpProjectID
            - name: GCP_SUB
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpSubscriptionName
            - name: GCP_BUCKET
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpBucket
          resources:
            limits:
              cpu: 1000m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
Docker image:
FROM ubuntu:18.04
COPY /checks/pre/ /checks/pre/
ENV HOME /checks/pre/
# Install needed packages
RUN apt-get update && \
apt-get -y install --no-install-recommends curl \
iputils-ping \
tar \
jq \
python \
ca-certificates \
&& mkdir -p /usr/local/gcloud && cd /usr/local/gcloud \
&& curl -o google-cloud-sdk.tar.gz -L -O https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz \
&& tar -xzf google-cloud-sdk.tar.gz \
&& rm -f google-cloud-sdk.tar.gz \
&& ./google-cloud-sdk/install.sh --quiet \
&& mkdir -p /.config/gcloud && chmod 775 -R /checks/pre /.config/gcloud \
&& apt-get autoclean \
&& apt-get autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
WORKDIR /checks/pre
USER 1001
ENTRYPOINT [ "/checks/pre/entrypoint.sh" ]

How to access GSM secrets through Cloud Build and pass to Cloud Function

How does one pass a secret from Google Secrets Manager (GSM) to a Cloud Function when using Cloud Build? The below cloudbuild.yaml has three steps. Further, I'm using volumes to create permanent storage between build steps. I can confirm GSM retrieval by Cloud Build. However, when I attempt to pass a secret in yaml format using --env-vars-file I encounter the following error ...
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: gcloud crashed (AttributeError): 'str' object has no attribute 'items'
cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    volumes:
      - name: 'secrets'
        path: '/secrets'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        echo -n 'gsm_secret:' > /secrets/my-secret-file.txt
  - name: 'gcr.io/cloud-builders/gcloud'
    volumes:
      - name: 'secrets'
        path: '/secrets'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud components update
        gcloud beta secrets versions access --secret=MySecret latest >> /secrets/my-secret-file.txt
        cat /secrets/my-secret-file.txt
  - name: 'gcr.io/cloud-builders/gcloud'
    volumes:
      - name: 'secrets'
        path: '/secrets'
    args: [
      'functions', 'deploy', 'gsm-foobar',
      '--project=[...]',
      '--trigger-http',
      '--runtime=go111',
      '--region=us-central1',
      '--memory=256MB',
      '--timeout=540',
      '--entry-point=GSM',
      '--allow-unauthenticated',
      '--source=https://source.developers.google.com/[...]',
      '--service-account', '[...]@appspot.gserviceaccount.com',
      '--env-vars-file', '/secrets/my-secret-file.txt'
    ]
Update:
Usage of volumes is not required, as /workspace is permanent storage between steps in Cloud Build. Also, gcloud components update is no longer necessary, as the default Cloud SDK version, as of today, is 279.0.0.
A Solution:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        echo "gsm_secret: $(gcloud beta secrets versions access --secret=MySecret latest)" > /workspace/my-secret-file.txt
        cat /workspace/my-secret-file.txt
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [
      'functions', 'deploy', 'gsm-foobar',
      [...]
      '--entry-point=GSM',
      '--allow-unauthenticated',
      '--source=https://source.developers.google.com/[...]',
      '--service-account', '[...]@appspot.gserviceaccount.com',
      '--env-vars-file=/workspace/my-secret-file.txt'
    ]
On second read, I realize your 2nd step puts the secret value in the file. I think you're missing the newline.
NB I've not tried this for myself!
Ensure you have a newline at the end of your secrets file.
See: https://cloud.google.com/functions/docs/env-var
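For what it's worth, the `'str' object has no attribute 'items'` crash is consistent with the env-vars file parsing to a plain string instead of a mapping: in YAML, `key:value` with no space after the colon is a single scalar, which is exactly what the `echo -n 'gsm_secret:'` step produced. A crude stdlib-only sketch of that distinction (not gcloud's actual parser):

```python
def parses_as_mapping(text):
    """Crude stand-in for the YAML rule that matters here: a line is a
    key/value mapping only if there is a space after the colon."""
    first_line = text.strip().splitlines()[0]
    key, sep, value = first_line.partition(": ")
    return bool(sep and key and value)

# What the first build step produced (echo -n 'gsm_secret:' + secret):
print(parses_as_mapping("gsm_secret:s3cr3t"))     # False -> gcloud crash
# What --env-vars-file actually needs:
print(parses_as_mapping("gsm_secret: s3cr3t\n"))  # True
```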
Update: tried it ;-)
I think your issue was the final newline.
Using the following in a step prior to the deployment, works:
echo "gsm_secret: $(gcloud beta secrets versions access --secret=MySecret latest)" > /secrets/my-secret-file.txt
Or, more simply, perhaps:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    entrypoint: /bin/bash
    args:
      - "-c"
      - |
        gcloud functions deploy ... \
          --set-env-vars=NAME=$(gcloud beta secrets versions access --secret=name latest)
Also, see secretEnv. This is a more elegant mechanism. This functionality should perhaps be augmented by Google to support Secret Manager (in addition to KMS).
As of 2021 February 10, you can access Secret Manager secrets directly from Cloud Build using the availableSecrets field:
steps:
  - id: 'deploy'
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - 'gcloud functions deploy --set-env-vars=SECRET=$$MY_SECRET'
    secretEnv: ['MY_SECRET']
availableSecrets:
  secretManager:
    - versionName: 'projects/my-project/secrets/my-secret/versions/latest'
      env: 'MY_SECRET'
Documentation

Running same Docker image with different flags on AWS and saving results

I have an experiment I'd like to run 100 different times, each with a command line flag set to a different integer value. Each experiment will output the result to a text file. Experiments take about 2 hours each and are independent of each other.
I currently have a Docker image that can run the experiment when provided the command line flag.
I am curious if there is a way to write a script that can launch 100 AWS instances (one for each possible flag value), run the Docker image, and then output the result to a shared text file somewhere. Is this possible? I am very inexperienced with AWS so I'm not sure if this is the proper tool or what steps would be required (besides building the Docker image).
Thanks.
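Whatever orchestrates it, the fan-out itself is simple: one command per flag value, each redirecting to its own result file that can later be copied to shared storage such as S3. A sketch of generating those commands (the image name, flag, and bucket are hypothetical placeholders):

```python
def experiment_commands(image="my-experiment-image", flag="--setting",
                        n=100, bucket="s3://my-results-bucket"):
    """Build the per-instance command lines: run the container with a
    distinct integer flag, write the output locally, then copy it to a
    shared bucket. All names here are placeholders."""
    cmds = []
    for i in range(1, n + 1):
        cmds.append(
            f"docker run --rm {image} {flag} {i} > result-{i}.txt && "
            f"aws s3 cp result-{i}.txt {bucket}/result-{i}.txt"
        )
    return cmds

cmds = experiment_commands()
print(len(cmds))  # 100
print(cmds[0])
```

Each generated line could then be handed to one instance via user data, SSM, or a provisioner like the ones described below.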
You could do this using vagrant with the vagrant-aws plugin to spin up the instances and the Docker Provisioner to pull your images / run your containers or the Ansible Provisioner. For example:
.
├── playbook.yml
└── Vagrantfile
The Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  N = 100
  (1..N).each do |server_id|
    config.vm.box = "dummy"
    config.ssh.forward_agent = true

    config.vm.define "server#{server_id}" do |server|
      server.vm.provider :aws do |aws, override|
        aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
        aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
        aws.instance_type = "t2.micro"
        aws.block_device_mapping = [
          {
            "DeviceName" => "/dev/sda1",
            "Ebs.VolumeSize" => 30
          }
        ]
        aws.tags = {
          "Name" => "node#{server_id}.example.com",
          "Environment" => "stage"
        }
        aws.subnet_id = "subnet-d65893b0"
        aws.security_groups = [
          "sg-deadbeef"
        ]
        aws.region = "eu-west-1"
        aws.region_config "eu-west-1" do |region|
          region.ami = "ami-0635ad49b5839867c"
          region.keypair_name = "ubuntu"
        end
        aws.monitoring = true
        aws.associate_public_ip = false
        aws.ssh_host_attribute = :private_ip_address
        override.ssh.username = "ubuntu"
        override.ssh.private_key_path = ENV["HOME"] + "/.ssh/id_rsa"
        override.ssh.forward_agent = true
      end

      if server_id == N
        server.vm.provision :ansible do |ansible|
          ansible.limit = "all"
          ansible.playbook = "playbook.yml"
          ansible.compatibility_mode = "2.0"
          ansible.raw_ssh_args = "-o ForwardAgent=yes"
          ansible.extra_vars = {
            "ansible_python_interpreter": "/usr/bin/python3"
          }
        end
      end
    end
  end
end
Note: this example does ansible parallel execution from the Tips & Tricks.
The ansible playbook.yml:
- hosts: all
  pre_tasks:
    - name: get instance facts
      local_action:
        module: ec2_instance_facts
        filters:
          private-dns-name: '{{ ansible_fqdn }}'
          "tag:Environment": stage
      register: _ec2_instance_facts
    - name: add route53 entry
      local_action:
        module: route53
        state: present
        private_zone: yes
        zone: 'example.com'
        record: '{{ _ec2_instance_facts.instances[0].tags["Name"] }}'
        type: A
        ttl: 7200
        value: '{{ _ec2_instance_facts.instances[0].private_ip_address }}'
        wait: yes
        overwrite: yes
  tasks:
    - name: install build requirements
      apt:
        name: ['python3-pip', 'python3-socks', 'git']
        state: present
        update_cache: yes
      become: true
    - name: apt install docker requirements
      apt:
        name: ['apt-transport-https', 'ca-certificates', 'curl', 'gnupg-agent', 'software-properties-common']
        state: present
      become: true
    - name: add docker apt key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
      become: true
    - name: add docker apt repository
      apt_repository:
        repo: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable'
        state: present
      become: true
    - name: apt install docker-ce
      apt:
        name: ['docker-ce', 'docker-ce-cli', 'containerd.io']
        state: present
        update_cache: yes
      become: true
    - name: get docker-compose
      get_url:
        url: 'https://github.com/docker/compose/releases/download/1.24.1/docker-compose-{{ ansible_system }}-{{ ansible_userspace_architecture }}'
        dest: /usr/local/bin/docker-compose
        mode: '0755'
      become: true
    - name: pip install docker and boto3
      pip:
        name: ['boto3', 'docker', 'docker-compose']
        executable: pip3
    - name: create docker config directory
      file:
        path: /etc/docker
        state: directory
      become: true
    - name: copy docker daemon.json
      copy:
        content: |
          {
            "group": "docker",
            "log-driver": "journald",
            "live-restore": true,
            "experimental": true,
            "insecure-registries" : [],
            "features": { "buildkit": true }
          }
        dest: /etc/docker/daemon.json
      become: true
    - name: enable docker service
      service:
        name: docker
        enabled: yes
      become: true
    - name: add ubuntu user to docker group
      user:
        name: ubuntu
        groups: docker
        append: yes
      become: true
    - name: restart docker daemon
      systemd:
        state: restarted
        daemon_reload: yes
        name: docker
        no_block: yes
      become: true
    # pull your images then run your containers
The only approach that I can think of is using AWS SSM to run multiple commands, but still, you would need to spin up 100s of instances, and that would not be a good approach.
Below is the set of commands you can use:
Spin up an instance using the below CloudFormation template, and run it in a loop to create multiple instances:
---
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: <region>
      ImageId: <amiID>
      InstanceType: t2.micro
      KeyName: <KeyName>
Use the below command to get the instance ID:
aws ec2 describe-instances --filters 'Name=tag:Name,Values=EC2' --query 'Reservations[*].Instances[*].InstanceId' --output text
Using that instance ID, run the below command:
aws ssm send-command --instance-ids "<instanceID>" --document-name "AWS-RunShellScript" --comment "<COMMENT>" --parameters commands='sudo yum update -y' --output text
I don't think docker will be of any help here as that would complicate things for you due to SSM agent installation. So your best bet would be running commands one by one and finally storing your output in S3.
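The "run it in a loop" step above can be sketched as generating one `create-stack` invocation per instance; the stack name must be unique, so the loop index is baked into it (the stack-name prefix and template path here are hypothetical):

```python
def create_stack_commands(template="instance-template.yml", count=3):
    """Build one `aws cloudformation create-stack` command per instance.
    The template file and stack-name prefix are placeholders."""
    return [
        f"aws cloudformation create-stack --stack-name exp-instance-{i} "
        f"--template-body file://{template}"
        for i in range(1, count + 1)
    ]

for cmd in create_stack_commands(count=2):
    print(cmd)
```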

IAM based ssh to EC2 instance using CloudFormation template

I am using an AWS CloudFormation template for IAM role-based access to an EC2 instance.
I am getting a permission denied error while running the template, and I am not able to access the EC2 machine with a username without a pem file.
Instance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      config:
        files:
          /opt/authorized_keys_command.sh:
            content: >
              #!/bin/bash -e
              if [ -z "$1" ]; then
                exit 1
              fi
              SaveUserName="$1"
              SaveUserName=${SaveUserName//"+"/".plus."}
              SaveUserName=${SaveUserName//"="/".equal."}
              SaveUserName=${SaveUserName//","/".comma."}
              SaveUserName=${SaveUserName//"@"/".at."}
              aws iam list-ssh-public-keys --user-name "$SaveUserName" --query "SSHPublicKeys[?Status == 'Active'].[SSHPublicKeyId]" --output text | while read KeyId; do
                aws iam get-ssh-public-key --user-name "$SaveUserName" --ssh-public-key-id "$KeyId" --encoding SSH --query "SSHPublicKey.SSHPublicKeyBody" --output text
              done
            mode: '000755'
            owner: root
            group: root
          /opt/import_users.sh:
            content: >
              #!/bin/bash
              aws iam list-users --query "Users[].[UserName]" --output text | while read User; do
                SaveUserName="$User"
                SaveUserName=${SaveUserName//"+"/".plus."}
                SaveUserName=${SaveUserName//"="/".equal."}
                SaveUserName=${SaveUserName//","/".comma."}
                SaveUserName=${SaveUserName//"@"/".at."}
                if id -u "$SaveUserName" >/dev/null 2>&1; then
                  echo "$SaveUserName exists"
                else
                  # sudo will read each file in /etc/sudoers.d, skipping file names that end in '~' or contain a '.' character to avoid causing problems with package manager or editor temporary/backup files.
                  SaveUserFileName=$(echo "$SaveUserName" | tr "." " ")
                  /usr/sbin/adduser "$SaveUserName"
                  echo "$SaveUserName ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/$SaveUserFileName"
                fi
              done
            mode: '000755'
            owner: root
            group: root
          /etc/cron.d/import_users:
            content: |
              */10 * * * * root /opt/import_users.sh
            mode: '000644'
            owner: root
            group: root
          /etc/cfn/cfn-hup.conf:
            content: !Sub |
              [main]
              stack=${AWS::StackId}
              region=${AWS::Region}
              interval=1
            mode: '000400'
            owner: root
            group: root
          /etc/cfn/hooks.d/cfn-auto-reloader.conf:
            content: !Sub >
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.Instance.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init --verbose --stack=${AWS::StackName} --region=${AWS::Region} --resource=Instance
              runas=root
        commands:
          a_configure_sshd_command:
            command: >-
              sed -i 's:#AuthorizedKeysCommand none:AuthorizedKeysCommand /opt/authorized_keys_command.sh:g' /etc/ssh/sshd_config
          b_configure_sshd_commanduser:
            command: >-
              sed -i 's:#AuthorizedKeysCommandUser nobody:AuthorizedKeysCommandUser nobody:g' /etc/ssh/sshd_config
          c_import_users:
            command: ./import_users.sh
            cwd: /opt
        services:
          sysvinit:
            cfn-hup:
              enabled: true
              ensureRunning: true
              files:
                - /etc/cfn/cfn-hup.conf
                - /etc/cfn/hooks.d/cfn-auto-reloader.conf
            sshd:
              enabled: true
              ensureRunning: true
              commands:
                - a_configure_sshd_command
                - b_configure_sshd_commanduser
    'AWS::CloudFormation::Designer':
      id: 85ddeee0-0623-4f50-8872-1872897c812f
  Properties:
    ImageId: !FindInMap
      - RegionMap
      - !Ref 'AWS::Region'
      - AMI
    IamInstanceProfile: !Ref InstanceProfile
    InstanceType: t2.micro
    UserData:
      'Fn::Base64': !Sub >
        #!/bin/bash -x
        /opt/aws/bin/cfn-init --verbose --stack=${AWS::StackName} --region=${AWS::Region} --resource=Instance
        /opt/aws/bin/cfn-signal --exit-code=$? --stack=${AWS::StackName} --region=${AWS::Region} --resource=Instance
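The username substitutions in the template's shell scripts are worth spelling out: IAM usernames may contain `+`, `=`, `,` and `@`, none of which are valid in Linux usernames, so each is mapped to a dotted token. A quick sketch mirroring those substitutions:

```python
def sanitize_iam_username(name):
    """Mirror of the ${SaveUserName//...} substitutions in
    authorized_keys_command.sh / import_users.sh: map characters that
    are legal in IAM usernames but not in Linux usernames."""
    for ch, repl in (("+", ".plus."), ("=", ".equal."),
                     (",", ".comma."), ("@", ".at.")):
        name = name.replace(ch, repl)
    return name

print(sanitize_iam_username("dev+ops@example.com"))
# dev.plus.ops.at.example.com
```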
This User Data script will configure a Linux instance to use password authentication.
While the password here is hard-coded, you could obtain it in other ways and set it to the appropriate value.
#!/bin/bash
echo 'secret-password' | passwd ec2-user --stdin
sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
systemctl restart sshd.service

aws-iam-authenticator install via ansible

Looking for how to translate (properly) a bash command (originally inside a Dockerfile) into an Ansible task/role that will download the latest aws-iam-authenticator binary and install it into /usr/local/bin on an Ubuntu (x64) OS.
Currently I have:
curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest | grep "browser_download.url.*linux_amd64" | cut -d : -f 2,3 | tr -d '"' | wget -O /usr/local/bin/aws-iam-authenticator -qi - && chmod 555 /usr/local/bin/aws-iam-authenticator
Basically you need to write a playbook and separate that command into various tasks.
Example example.yml file
- hosts: localhost
  tasks:
    - shell: |
        curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
      register: json
    - set_fact:
        url: "{{ (json.stdout | from_json).assets[2].browser_download_url }}"
    - get_url:
        url: "{{ url }}"
        dest: /usr/local/bin/aws-iam-authenticator-ansible
        mode: 0555
you can execute it by doing
ansible-playbook --become example.yml
I hope this is what you're looking for ;-)
So after finding other posts that gave strong hints, information, and unresolved issues (Ansible - Download latest release binary from Github repo & https://github.com/ansible/ansible/issues/27299#issuecomment-331068246), I was able to come up with the following Ansible task that works for me.
- name: Get latest url for linux-amd64 release for aws-iam-authenticator
  uri:
    url: https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
    return_content: true
    body_format: json
  register: json_response

- name: Download and install aws-iam-authenticator
  get_url:
    url: "{{ json_response.json | to_json | from_json | json_query(\"assets[?ends_with(name,'linux_amd64')].browser_download_url | [0]\") }}"
    mode: 555
    dest: /usr/local/bin/aws-iam-authenticator
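The json_query filter in the second task reduces to "first asset whose name ends with linux_amd64". A plain-Python equivalent of that selection, operating on the shape of the GitHub releases JSON (the URLs below are dummy stand-ins):

```python
def pick_linux_amd64(release):
    """Equivalent of json_query("assets[?ends_with(name,'linux_amd64')]
    .browser_download_url | [0]"): first matching asset's URL."""
    for asset in release.get("assets", []):
        if asset["name"].endswith("linux_amd64"):
            return asset["browser_download_url"]
    return None

# Minimal stand-in for the /releases/latest payload:
release = {"assets": [
    {"name": "aws-iam-authenticator_0.5.0_darwin_amd64",
     "browser_download_url": "https://example.invalid/darwin"},
    {"name": "aws-iam-authenticator_0.5.0_linux_amd64",
     "browser_download_url": "https://example.invalid/linux"},
]}
print(pick_linux_amd64(release))  # https://example.invalid/linux
```

Matching on the name suffix, rather than the hard-coded `assets[2]` index from the earlier answer, keeps the task working when the release's asset ordering changes.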
Note
If you're running the AWS CLI version 1.16.156 or later, then you don't need to install the authenticator. Instead, you can use the aws eks get-token command. For more information, see Create kubeconfig manually.