Using CloudFoundry, is there a way to define a custom DNS search so host names are resolved?
We are using an Ubuntu stemcell and need to reach out to an external server. Using an FQDN this works, but we would prefer to use the host name only. On a Unix/Linux box this is generally set in resolv.conf, but I wasn't sure how to define this in CloudFoundry.
One option here would be a Bosh add-on. A Bosh add-on will run on all VMs managed by your Bosh Director. Here are some example add-ons.
You'll want to use the os-conf release for your add-on. It has a job called search_domain which lets you set the search domain on all of the Bosh-deployed VMs.
I haven't tested it, but I believe a manifest like this should work.
releases:
- name: os-conf
  version: 12

addons:
- name: search-domain
  jobs:
  - name: search_domain
    release: os-conf
    properties:
      search_domain: my.domain.com
That would add my.domain.com to the list of search domains in resolv.conf. Hope that helps!
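One note, in case it helps (also untested, like the manifest above): BOSH add-ons like this are normally defined in the Director's runtime config. Assuming you save the YAML above as runtime-config.yml, you would apply it with something like:
bosh update-runtime-config runtime-config.yml
and then redeploy so existing VMs pick up the new search domain.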
(Warning, newbie here.) I'm learning Packer by building a VM. I followed links to the cloud-builders-community/packer example. Unfortunately this seems to be out of date. It pushes the output to gcr.io … which I'm discovering is being deprecated in favour of Artifact Registry. It's also using YAML instead of HCL2.
Is this old code and is there an up to date equivalent somewhere else?
Assuming I can or should continue using this sample code…
I'm confused about a couple of things. Artifact Registry's Create Repository page has options for Docker, Maven, etc. but does not have an option for VM images. Do I just choose Docker?
Then in cloud-builders-community/packer/cloudbuild.yaml, what path do I use to replace gcr.io? gcr.io appears multiple times.
From: https://github.com/GoogleCloudPlatform/cloud-builders-community/packer/cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/wget'
  args: ["https://releases.hashicorp.com/packer/${_PACKER_VERSION}/packer_${_PACKER_VERSION}_linux_amd64.zip"]
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/packer:${_PACKER_VERSION}',
         '-t', 'gcr.io/$PROJECT_ID/packer',
         '--build-arg', 'PACKER_VERSION=${_PACKER_VERSION}',
         '--build-arg', 'PACKER_VERSION_SHA256SUM=${_PACKER_VERSION_SHA256SUM}',
         '.']
substitutions:
  _PACKER_VERSION: 1.7.8
  _PACKER_VERSION_SHA256SUM: 8a94b84542d21b8785847f4cccc8a6da4c7be5e16d4b1a2d0a5f7ec5532faec0
images:
- 'gcr.io/$PROJECT_ID/packer:latest'
- 'gcr.io/$PROJECT_ID/packer:${_PACKER_VERSION}'
tags: ['cloud-builders-community']
BTW, the overall arc of my learning project is:
Packer => VM Image => GCP Artifact Repository => Terraform => GCP VM
I don't know the specifics of Packer, but in general, when using AR with docs that specify gcr.io:
Any time you want to use AR as a replacement for GCR, you should choose the Docker repository format.
You should replace gcr.io/$PROJECT_ID with $REGION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY_ID anywhere it refers to your own project, and leave gcr.io URLs as-is when they refer to other people's projects (like gcr.io/cloud-builders); see the sketch below.
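For example, here is a rough, untested sketch of what the image references in the cloudbuild.yaml above might look like after switching to Artifact Registry (us-central1 and the packer-images repository are placeholders you'd replace with your own region and repository):
steps:
# Builder images owned by other projects stay on gcr.io
- name: 'gcr.io/cloud-builders/wget'
  args: ["https://releases.hashicorp.com/packer/${_PACKER_VERSION}/packer_${_PACKER_VERSION}_linux_amd64.zip"]
- name: 'gcr.io/cloud-builders/docker'
  args: ['build',
         '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/packer-images/packer:${_PACKER_VERSION}',
         '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/packer-images/packer',
         '--build-arg', 'PACKER_VERSION=${_PACKER_VERSION}',
         '--build-arg', 'PACKER_VERSION_SHA256SUM=${_PACKER_VERSION_SHA256SUM}',
         '.']
images:
# Your own images move to the Artifact Registry path
- 'us-central1-docker.pkg.dev/$PROJECT_ID/packer-images/packer:latest'
- 'us-central1-docker.pkg.dev/$PROJECT_ID/packer-images/packer:${_PACKER_VERSION}'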
I've set up a Google Cloud Run service with continuous deployment from a GitHub repo, and it redeploys every time there's a push to main (which is what I want), but when I go to check the site, it hasn't updated the HTML I've been testing with. I've tested it on my local machine, and it updates the code when I run the Django server, so I'm guessing it's something with my cloudbuild.yml? There was another post I tried to mimic, but it didn't take.
Any advice would be very helpful! Thank you!
cloudbuild.yml:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore', './ExePlore']
# Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore']
# Deploy image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'exeplore'
  - '--image'
  - 'gcr.io/${PROJECT_ID}/exeplore'
  - '--region'
  - 'europe-west2'
  - '--platform'
  - 'managed'
images:
- gcr.io/${PROJECT_ID}/exeplore
Here are the variables for GCR
Edit 1: I've now updated my cloudbuild, so the SHORT_SHA is all gone, but now Cloud Run is saying it can't find my manage.py at /Exeplore/manage.py. I might have to trial and error it, as running the container locally is fine, and same with running the server locally. I have yet to try what Ezekias suggested, as I've tried rolling back to when it was correctly running the server and it doesn't like that.
Edit 2: I've checked the services, it is at 100% Latest
Check your Cloud Run service, either in the Cloud Console or by running gcloud run services describe. It may be set to serve traffic to a specific revision instead of serving 100% of traffic from LATEST.
If that's the case, it won't automatically move traffic to the new revision when you deploy. If you want it to switch automatically, you can run gcloud run services update-traffic --to-latest or use the "Manage Traffic" button on the Revisions tab of the Cloud Console to set 100% of traffic to the latest healthy revision.
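For example, assuming the exeplore service and europe-west2 region from your cloudbuild.yml (untested, adjust as needed):
gcloud run services describe exeplore --region europe-west2 --platform managed
gcloud run services update-traffic exeplore --to-latest --region europe-west2 --platform managed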
It looks like you're building gcr.io/${PROJECT_ID}/exeplore:$SHORT_SHA, but pushing and deploying gcr.io/${PROJECT_ID}/exeplore. These are essentially different images.
Update any image variables to include the SHORT_SHA so that all references point to the same tag.
To avoid duplication you may also want to use dynamic substitution variables, as in the sketch below.
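For example, here is a rough, untested sketch of the same cloudbuild.yml using a single _IMAGE substitution so the build, push, and deploy steps all reference the same tag (dynamic_substitutions is needed so a user-defined substitution can expand the built-in PROJECT_ID and SHORT_SHA):
options:
  dynamic_substitutions: true
substitutions:
  _IMAGE: 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'
steps:
# Build, push, and deploy the same tagged image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', '${_IMAGE}', './ExePlore']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', '${_IMAGE}']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'exeplore', '--image', '${_IMAGE}', '--region', 'europe-west2', '--platform', 'managed']
images:
- '${_IMAGE}'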
Currently I'm using Cloud Build to produce some artifacts that I need to deploy to GCE instance. I've tried to use gcloud builder for this purpose with the following args:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'scp', '--zone=<zone_id>', '<local_path>', '<google compute engine instance name>:<instance_path>']
and build fails with the following error:
ERROR: (gcloud.compute.scp) Could not SSH into the instance. It is
possible that your SSH key has not propagated to the instance yet. Try
running this command again. If you still cannot connect, verify that
the firewall and instance are set to accept ssh traffic.
I've already opened port 22 on my instance but that hasn't helped.
Could you guys help me to solve this problem?
What points I need to check/fix in my build definition?
Maybe you can give me advice on which builder I can use instead of gcloud to deliver my data from the Cloud Build container to the GCE instance?
A few things to try:
1. Make sure you can SSH into the instance normally this way (see the example command after this list). See Troubleshooting SSH if step one fails.
2. Try changing the SSH target from 'instancename' to 'username@instance' in order to indicate the name of the user inside the VM, e.g. username@InstanceName.
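For example, something like this (the user, instance name, and zone are placeholders you'd replace with your own):
gcloud compute ssh my-user@my-instance --zone us-central1-a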
You must find a way to generate and locate the SSH key files for the builder to connect to the GCE instance:
google_compute
google_compute.pub
google_compute_known_hosts
They are identical to the ones you use to connect directly to the instance from your Cloud Shell or from your local computer, but this time the connection has to be made by the builder itself.
Create those files interactively, as explained in SSH Key Generation, in the builder's identity path (test it with cd ~ && pwd; usually /builder/home/.ssh).
After a connection has been made, copy these files to Google Cloud Storage via gsutil. This step only needs to be done once.
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-rP', '${_BUILDER_HOME}/.ssh', 'gs://${_BUCKET_NAME}/builder/']
substitutions:
  _BUCKET_NAME: <bucket_name>
  _BUILDER_HOME: <builder_home>
timeout: "60s"
You can copy those key files into your workspace if you prefer; otherwise they need to remain in the storage bucket.
The reason for storing them is that they are needed to reconnect to the instance: each time the builder starts it is reset to its default state, so the key files will no longer exist locally.
Once the key files are ready, you can do the scp transfer like below:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-rP', 'gs://${_BUCKET_NAME}/builder/.ssh', '${_BUILDER_HOME}']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'scp', '--recurse', '--zone', '${_ZONE}', '${_LOCAL_PATH}', '${_USER_NAME}@${_INSTANCE_NAME}:${_INSTANCE_PATH}']
substitutions:
  _ZONE: <zone>
  _USER_NAME: <user_name>
  _LOCAL_PATH: <local_path>
  _BUCKET_NAME: <bucket_name>
  _BUILDER_HOME: <builder_home>
  _INSTANCE_NAME: <instance_name>
  _INSTANCE_PATH: <instance_path>
timeout: "60s"
Note: Use the '--recurse' flag to copy a directory, or omit it to copy a single file.
I am using Ansible to deploy to Amazon EC2, and I have ec2.py and ec2.ini set up such that I can retrieve a list of servers from Amazon. I have my server at AWS tagged rvmdocker:production, and ansible all --list returns my tag as ec2_tag_rvmdocker_production. I can also run:
ansible -m ping tag_rvmdocker_production
and it works. But if I have that tag in a static inventory file, and run:
ansible all -m ping -i production
it returns:
tag_rvmdocker_production | UNREACHABLE! => {
    "changed": false,
    "msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
    "unreachable": true
}
Here is my production inventory file:
[dockerservers]
tag_rvmdocker_production
It looks like Ansible can't resolve tag_rvmdocker_production when it's in the static inventory file.
UPDATE
I followed ydaetskcoR's advice and am now getting a new error message:
$ ansible-playbook -i production app.yml
ERROR! ERROR! production:2: Section [dockerservers:children] includes undefined group: tag_rvmdocker_production
But I know the tag exists, and it seems like Ansible and ec2.py know it:
$ ansible tag_rvmdocker_production --list
hosts (1):
12.34.56.78
Here is my production inventory:
[dockerservers:children]
tag_rvmdocker_production
And my app.yml playbook file:
---
- name: Deploy RVM app to production
  hosts: dockerservers
  remote_user: ec2-user
  become: true
  roles:
    - ec2
    - myapp
In the end, I'd love to be able to run the same playbook against development (a VM on my Mac), staging, or production, to start an environment. My thought was to have static inventory files that pointed to tags or groups on EC2. Am I even approaching this the right way?
I had a similar issue to this, and resolved it as follows.
First, I created a folder to contain my inventory files, and put in there a symlink to my /etc/ec2.ini, a copy (or symlink) to the ec2.py script (with executable status), and a hosts file as follows.
$ ls amg-dev/*
amg-dev/ec2.ini -> /etc/ec2.ini
amg-dev/ec2.py
amg-dev/hosts
My EC2 instances are tagged with a Type = amg_dev_web
The hosts file contains the following information - the empty group declaration at the top is important here.
[tag_Type_amg_dev_web]
[webservers:children]
tag_Type_amg_dev_web
[all:children]
webservers
Then when I run ansible-playbook I specify only the name of the folder as the inventory, which makes Ansible read the hosts file and execute the ec2.py script to interrogate AWS.
ansible-playbook -i amg-dev/ playbook.yml
Inside my playbook, I refer to these as webservers as follows
- name: WEB | Install and configure relevant packages
  hosts: webservers
  roles:
    - common
    - web
Which seems to work as expected.
As discussed in the comments, it looks like you've misunderstood the use of tags in a dynamic inventory.
The AWS EC2 dynamic inventory script allows you to target groups of servers by a tag key/value combination. So to target your web servers you may have a tag called Role that in this case is set to web which you would then target as a dynamic group with tag_Role_web.
You can also have static groups that contain dynamic groups as children. This is much the same as how you normally use groups of groups in an inventory file, which might look like this:
[web-servers:children]
front-end-web-servers
php-web-servers
[front-end-web-servers]
www-web-1
www-web-2
[php-web-servers]
php-web-1
php-web-2
This allows you to generically target or set group variables for all of the web servers above simply by using the more generic web-servers group, and then configure the specific types of web servers using the more specific front-end-web-servers or php-web-servers groups.
However, if you put an entry under a group where it isn't defined as a child group then Ansible will assume that this is a host and will then attempt to connect to that host directly.
If you have a uniquely tagged instance that you are trying to reach via dynamic inventory then you simply use it as if it was a group (it just happens to currently only have one instance in it).
So if you want to target or set variables for the dockerservers group, which then includes an instance that is tagged with the key/value combination of rvmdocker: production, then you would just do this:
[dockerservers:children]
tag_rvmdocker_production
[tag_rvmdocker_production]
How can I launch (purchase) a reserved EC2 instance using Ansible with the EC2 module? I've googled using words like 'ec2 reserved instance ansible' but no joy.
Or should I use AWS CLI instead?
Or you can create your own Ansible module.
There are also existing modules that you can use as examples in ansible-modules-extras/cloud/amazon.
PS:
Modules can be written in any language and are found in the path
specified by ANSIBLE_LIBRARY or the --module-path command line option.
By default, everything that ships with ansible is pulled from its
source tree, but additional paths can be added.
The directory "./library", alongside your top level playbooks, is also
automatically added as a search directory.
I just made a PR which might help you.
You could use it as follows:
- name: Purchase reserved instances
  boto3:
    name: ec2
    region: us-east-1
    operation: purchase_reserved_instances_offering
    parameters:
      ReservedInstancesOfferingId: 9a06095a-bdc6-47fe-a94a-2a382f016040
      InstanceCount: 3
      LimitPrice:
        Amount: 123.0
        CurrencyCode: USD
  register: result

- debug: var=result
If you're interested in this feature, feel free to vote up the PR. :)
I looked into the Cloud modules list and found there isn't any module out of the box that supports reserved instances - I think you could try building a wrapper over the AWS CLI or the Python Boto SDK (or any SDK).
This is the pseudo code for the playbook :
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: 'Calling Python Code to reserve instance'
      raw: python reserve-ec2-instance.py args
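As an illustration only, here is a minimal sketch of the AWS CLI wrapper approach as an Ansible playbook. It assumes the AWS CLI is installed and configured on the control machine, and reuses the offering ID from the answer above; the instance count is arbitrary:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # Purchase a reserved instances offering via the AWS CLI
    - name: Purchase reserved instances offering
      command: >
        aws ec2 purchase-reserved-instances-offering
        --reserved-instances-offering-id 9a06095a-bdc6-47fe-a94a-2a382f016040
        --instance-count 1
      register: purchase_result

    # Show the raw CLI output (JSON describing the purchased reservation)
    - debug: var=purchase_result.stdout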