Packer doesn't import project ssh keys (googlecompute) - google-cloud-platform

Packer seems to exclude the project's SSH keys even though I have set block-project-ssh-keys to false. The final command fails, even though that user has an SSH key tied to the project.
Any ideas?
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "mahamed901",
      "source_image_family": "ubuntu-1804-lts",
      "ssh_username": "packer",
      "zone": "europe-west1-b",
      "preemptible": "true",
      "image_description": "Worker Node for Jenkins (Java + Docker)",
      "disk_type": "pd-ssd",
      "disk_size": "10",
      "metadata": {"block-project-ssh-keys": "false"},
      "image_name": "ubuntu1804-jenkins-docker-{{isotime | clean_image_name}}",
      "image_family": "ubuntu1804-jenkins-worker"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt update",
        "#sudo apt upgrade -y",
        "#sudo apt-get install -y git make default-jdk",
        "#curl https://get.docker.com/ | sudo bash",
        "uptime",
        "sudo curl -L \"https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose",
        "sudo chmod +x /usr/local/bin/docker-compose",
        "sleep 5",
        "cat /etc/passwd",
        "#sudo usermod -aG docker jenkins",
        "#sudo docker ps",
        "#rm ~/.ssh/authorized_keys"
      ]
    }
  ]
}

This is controlled by the metadata option block-project-ssh-keys, which can be true or false.
See this issue.
(The format of your metadata is wrong; remove the square brackets [ ].)
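In other words, the googlecompute metadata must be a JSON object (a map of string keys to string values), not a list of objects. A minimal sketch of the correct fragment inside the builder:
"metadata": {
  "block-project-ssh-keys": "false"
}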

Related

Script exited with non-zero exit status: 100. Allowed exit codes are: [0] : packer error [duplicate]

I have a shell provisioner in Packer connected to a box with the user vagrant:
{
  "environment_vars": [
    "HOME_DIR=/home/vagrant"
  ],
  "expect_disconnect": true,
  "scripts": [
    "scripts/foo.sh"
  ],
  "type": "shell"
}
where the content of the script is:
whoami
sudo su
whoami
and the output strangely remains:
==> virtualbox-ovf: Provisioning with shell script: scripts/configureProxies.sh
virtualbox-ovf: vagrant
virtualbox-ovf: vagrant
Why can't I switch to the root user?
How can I execute statements as root?
Note: I do not want to wrap every statement like sudo "statement | foo", but rather switch the user globally, as demonstrated with sudo su.
You should override the execute_command. Example:
"provisioners": [
{
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'",
"scripts": [
"scripts/foo.sh"
],
"type": "shell"
}
],
There is another, simpler solution that uses two provisioners together.
Packer's shell provisioner can run Bash with sudo privileges. First copy your script file from the local machine to the remote machine with the file provisioner, then run it with the shell provisioner.
packer.json
{
  "vars": [...],
  "builders": [
    {
      # ...
      "ssh_username": "<some_user_other_than_root_with_passwordless_sudo>",
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "scripts/foo.sh",
      "destination": "~/shell.tmp.sh"
    },
    {
      "type": "shell",
      "inline": ["sudo bash ~/shell.tmp.sh"]
    }
  ]
}
foo.sh
# ...
whoami
sudo su root
whoami
# ...
output
<some_user_other_than_root_with_passwordless_sudo>
root
After the provisioner completes its task, you can delete the file with the shell provisioner.
packer.json updated
{
  "type": "shell",
  "inline": ["sudo bash ~/shell.tmp.sh", "rm ~/shell.tmp.sh"]
}
One possible answer seems to be:
https://unix.stackexchange.com/questions/70859/why-doesnt-sudo-su-in-a-shell-script-run-the-rest-of-the-script-as-root
sudo su <<HERE
ls /root
whoami
HERE
Maybe there is a better answer?
Assuming that the shell provisioner you are using is a bash script, you can add my technique to your script.
# Re-run the current script as root, preserving selected exported variables.
# (install_self is a helper defined elsewhere in the same script, not shown here.)
function if_not_root_rerun_as_root(){
    install_self
    if [[ "$(id -u)" -ne 0 ]]; then
        run_as_root_keeping_exports "$0" "$@"
        exit $?
    fi
}
function run_as_root_keeping_exports(){
    # Re-invoke under sudo, forwarding every variable named in _EXPORTS.
    eval sudo $(for x in $_EXPORTS; do printf '%s=%q ' "$x" "${!x}"; done;) "$@"
}
export _EXPORTS="PACKER_BUILDER_TYPE PACKER_BUILD_NAME"
if_not_root_rerun_as_root "$@"
There is a pretty good explanation of "$@" here on StackOverflow.

Installing authorized_keys file under custom user for Ubuntu AWS

I'm trying to set up an Ubuntu server and log in with a non-default user. I've used cloud-config in the user data to set up an initial user, and Packer to provision the server:
system_info:
  default_user:
    name: my_user
    shell: /bin/bash
    home: /home/my_user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
Packer logs in and provisions the server as my_user, but when I launch an instance from the AMI, AWS installs the authorized_keys file under /home/ubuntu/.ssh/.
Packer config:
{
  "variables": {
    "aws_profile": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "profile": "{{user `aws_profile`}}",
    "region": "eu-west-1",
    "instance_type": "c5.large",
    "source_ami_filter": {
      "most_recent": true,
      "owners": ["099720109477"],
      "filters": {
        "name": "*ubuntu-xenial-16.04-amd64-server-*",
        "virtualization-type": "hvm",
        "root-device-type": "ebs"
      }
    },
    "ami_name": "my_ami_{{timestamp}}",
    "ssh_username": "my_user",
    "user_data_file": "cloud-config"
  }],
  "provisioners": [{
    "type": "shell",
    "pause_before": "10s",
    "inline": [
      "echo 'run some commands'"
    ]
  }]
}
Once the server has launched, both ubuntu and my_user users exist in /etc/passwd:
my_user:1000:1002:Ubuntu:/home/my_user:/bin/bash
ubuntu:x:1001:1003:Ubuntu:/home/ubuntu:/bin/bash
At what point does the ubuntu user get created, and is there a way to install the authorized_keys file under /home/my_user/.ssh at launch instead of ubuntu?
To persist the default user when using the AMI to launch new EC2 instances from it, you have to change the value in /etc/cloud/cloud.cfg and update this part:
system_info:
  default_user:
    # Update this!
    name: ubuntu
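A minimal sketch of that edit, assuming the custom user from the question (my_user):
# /etc/cloud/cloud.cfg (excerpt), edited inside the image before it is baked into the AMI
system_info:
  default_user:
    name: my_user
    home: /home/my_user
    shell: /bin/bash
With name pointing at my_user, cloud-init should treat it as the default user on first boot, so the launch key pair's public key lands in /home/my_user/.ssh/authorized_keys instead of /home/ubuntu/.ssh/.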
You can add your public keys when you create the user using cloud-init. Here is how you do it.
users:
  - name: <username>
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
Adding additional SSH user account with cloud-init

Packer AWS instance builder throwing error "--manifest has invalid value: File does not exist or is not a file"?

I am using the amazon-instance builder to create an image from an AMI. I am passing all parameters correctly, but I don't know which value I should pass to --manifest. I am getting the following error:
amazon-instance: --manifest has invalid value '/tmp/ami-257e6b5c.manifest.xml': File does not exist or is not a file.
I am using the following file for the conversion:
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-instance",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-west-2",
    "source_ami": "ami-257e6b5c",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "account_id": "12345678",
    "bundle_upload_command": "sudo ec2-upload-bundle -b packer-images -m /tmp/manifest.xml -a access_key -s secret_key -d /tmp --batch --retry",
    "s3_bucket": "packer-images",
    "x509_cert_path": "server.crt",
    "x509_key_path": "server.key",
    "x509_upload_path": "/tmp",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
Don't replace the template placeholders with hard-coded values; copy the command from the docs and modify only what you need.
sudo ec2-upload-bundle \
-b {{.BucketName}} \
-m {{.ManifestPath}} \
-a {{.AccessKey}} \
-s {{.SecretKey}} \
-d {{.BundleDirectory}} \
--batch \
--retry
See bundle_upload_command.
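In the JSON template, that single setting would look roughly like this (a sketch, keeping the sudo prefix from your command and leaving the rest of the builder unchanged):
"bundle_upload_command": "sudo ec2-upload-bundle -b {{.BucketName}} -m {{.ManifestPath}} -a {{.AccessKey}} -s {{.SecretKey}} -d {{.BundleDirectory}} --batch --retry"
Packer substitutes {{.ManifestPath}} and the other placeholders at run time, so you don't need to work out the manifest path yourself.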
The reason you have to remove --region is that you have an old version of the AMI tools. I recommend that you install a newer version from source; see Set Up AMI Tools.

Copy local file to remote AWS EC2 instance with Ansible

I'm attempting to build an AWS AMI using Packer, with Ansible to provision the AMI. I'm getting stuck on copying some local files to my newly spun-up EC2 instance using Ansible. I'm using Ansible's copy module to do this. Here's what my Ansible code looks like:
- name: Testing copy of the local remote file
  copy:
    src: /tmp/test.test
    dest: /tmp
Here's the error I get:
amazon-ebs: TASK [Testing copy of the local remote file] ***********************************
amazon-ebs: fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find '/tmp/test.test' in expected paths."}
I've verified that the file /tmp/test.test exists on my local machine from which Ansible is running.
For my hosts file I just have localhost in it, since Packer tells Ansible everything it needs to know about where to run Ansible commands.
I'm not sure where to go from here or how to properly debug this error, so I'm hoping for a little help.
Here's what my Packer script looks like:
{
  "variables": {
    "aws_access_key": "{{env `access_key`}}",
    "aws_secret_key": "{{env `secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-116d857a",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "generic_jenkins_image",
    "ami_description": "Testing AMI building with Packer",
    "vpc_id": "xxxxxxxx",
    "subnet_id": "xxxxxxxx",
    "associate_public_ip_address": "true",
    "tags": {"Environment" : "Dev", "Product": "SharedOperations"}
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "sudo rm -f /var/lib/dpkg/lock",
        "sudo apt-get update -y --fix-missing",
        "sudo apt-get -y install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev gcc build-essential python-pip",
        "sudo pip install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/main.yml"
    }
  ]
}
And here's my entire Ansible file:
---
- hosts: all
  sudo: yes
  tasks:
    - name: Testing copy of the local remote file
      copy:
        src: /tmp/test.test
        dest: /tmp
You are using the ansible-local provisioner, which runs the playbooks directly on the target machine ("local" in HashiCorp products such as Vagrant and Packer describes the point of view of the provisioned machine).
The target does not have the /tmp/test.test file, hence you get the error.
You actually want to run the playbook using the regular Ansible provisioner.
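A minimal sketch of that change in the Packer template, assuming Ansible is installed on the machine that runs Packer rather than inside the image:
"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "ansible/main.yml"
  }
]
With the remote ansible provisioner, the playbook and the files it references (such as /tmp/test.test) are read on your local machine and pushed over SSH, so the shell step that pip-installs Ansible inside the image is no longer needed for provisioning.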

Docker Port Forwarding

I am trying to run my Django application in a Docker container. I am using uWSGI to serve the Django application, and I also have a Celery worker running in the background. These processes are started by supervisord.
The problem I am having is that I am unable to see the application on the port where I would expect it. I am exposing port 8080 and running the uWSGI process on 8080, but I cannot find my application in a browser at $(boot2docker ip):8080; I just get Google Chrome's 'This webpage is not available'. (I am using a Mac, so I need to use the boot2docker IP address.) The container is clearly running, and reports that my uwsgi and celery processes are both running successfully as well.
When I run docker exec CONTAINER_ID curl localhost:8080 I get a response like
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21 0 21 0 0 3150 0 --:--:-- --:--:-- --:--:-- 3500
... so it seems like the container is accepting connections on port 8080.
When I run docker exec CONTAINER_ID netstat -lpn | grep :8080 I get:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 11/uwsgi
When I run docker inspect CONTAINER_ID I get the following:
[{
    "AppArmorProfile": "",
    "Args": [
        "-c",
        "/home/docker/code/supervisor-app.conf"
    ],
    "Config": {
        "AttachStderr": true,
        "AttachStdin": false,
        "AttachStdout": true,
        "Cmd": [
            "supervisord",
            "-c",
            "/home/docker/code/supervisor-app.conf"
        ],
        "CpuShares": 0,
        "Cpuset": "",
        "Domainname": "",
        "Entrypoint": null,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "env=staging"
        ],
        "ExposedPorts": {
            "8080/tcp": {}
        },
        "Hostname": "21443d8a16df",
        "Image": "vitru",
        "Memory": 0,
        "MemorySwap": 0,
        "NetworkDisabled": false,
        "OnBuild": null,
        "OpenStdin": false,
        "PortSpecs": null,
        "StdinOnce": false,
        "Tty": false,
        "User": "",
        "Volumes": null,
        "WorkingDir": ""
    },
    "Created": "2014-12-27T01:00:22.390065668Z",
    "Driver": "aufs",
    "ExecDriver": "native-0.2",
    "HostConfig": {
        "Binds": null,
        "CapAdd": null,
        "CapDrop": null,
        "ContainerIDFile": "",
        "Devices": [],
        "Dns": null,
        "DnsSearch": null,
        "ExtraHosts": null,
        "Links": null,
        "LxcConf": [],
        "NetworkMode": "bridge",
        "PortBindings": {},
        "Privileged": false,
        "PublishAllPorts": false,
        "RestartPolicy": {
            "MaximumRetryCount": 0,
            "Name": ""
        },
        "SecurityOpt": null,
        "VolumesFrom": null
    },
    "HostnamePath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hostname",
    "HostsPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hosts",
    "Id": "21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607",
    "Image": "de52fbada520519793e348b60b608f7db514eef7fd436df4542710184c1ecb7f",
    "MountLabel": "",
    "Name": "/suspicious_fermat",
    "NetworkSettings": {
        "Bridge": "docker0",
        "Gateway": "172.17.42.1",
        "IPAddress": "172.17.0.87",
        "IPPrefixLen": 16,
        "MacAddress": "02:42:ac:11:00:57",
        "PortMapping": null,
        "Ports": {
            "8080/tcp": null
        }
    },
    "Path": "supervisord",
    "ProcessLabel": "",
    "ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/resolv.conf",
    "State": {
        "ExitCode": 0,
        "FinishedAt": "0001-01-01T00:00:00Z",
        "Paused": false,
        "Pid": 16230,
        "Restarting": false,
        "Running": true,
        "StartedAt": "2014-12-27T01:00:22.661588847Z"
    },
    "Volumes": {},
    "VolumesRW": {}
}
]
As someone not terribly fluent in Docker, I'm not really sure what all that means. Maybe there is a clue in there as to why I cannot connect to my server?
Here is my Dockerfile, so you can see if I'm doing anything blatantly wrong in there.
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
    apt-get install -y postgresql libpq-dev && \
    rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
# Create virtualenv and run pip install
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
EXPOSE 8080
# The supervisor conf file starts uwsgi on port 8080 and starts a celeryd worker
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
I believe the problem you have is that EXPOSE only makes the ports available between containers, not to the host system. See the docs here:
https://docs.docker.com/reference/builder/#expose
You need to "publish" the port via the -p flag of the docker run command:
https://docs.docker.com/reference/run/#expose-incoming-ports
There is a similar distinction in Fig, if you were using it, between expose and ports directives in the fig.yml file.
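For example, a minimal sketch assuming the image is tagged vitru as in the Dockerfile above:
# publish container port 8080 on port 8080 of the Docker host (the boot2docker VM)
docker run -d -p 8080:8080 vitru
# then browse to http://$(boot2docker ip):8080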