AWS block device names don't match CentOS symlinks - amazon-web-services

On AWS EC2 the block devices are identified as /dev/sda, /dev/sdf and /dev/sdg, but inside the EC2 CentOS instance, when I run ll /dev/sd* it gives the following:
lrwxrwxrwx. 1 root root 4 Feb 17 03:10 /dev/sda -> xvde
lrwxrwxrwx. 1 root root 4 Feb 17 03:10 /dev/sdj -> xvdj
lrwxrwxrwx. 1 root root 4 Feb 17 03:10 /dev/sdk -> xvdk
lrwxrwxrwx. 1 root root 5 Feb 17 03:10 /dev/sdk1 -> xvdk1
When I run ec2-describe-instances --aws-access-key xxxxxx<MyKey>xxx --aws-secret-key xxxxxx<MyKey>xxx --region us-east-1 $(curl -s http://169.254.169.254/latest/meta-data/instance-id) | grep -i BLOCKDEVICE the output is as follows:
/dev/sda
/dev/sdf
/dev/sdg
I am wondering how to map these two together: the block device names in the AWS console and the block devices inside the EC2 instance?
Thanks,

This is a device-mapping alias problem. You can find more details and a solution here:
https://forums.aws.amazon.com/message.jspa?messageID=255240
Make sure you take backups of everything before making any changes!
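As a quick way to see which console device name points at which kernel device, you can walk the symlinks yourself; a minimal sketch (the offset between sdX and xvdX depends on the kernel, so treat the output as the source of truth):
# List each /dev/sdX alias and the xvd* device it resolves to
for link in /dev/sd*; do
  printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
The left-hand names correspond to the console's block-device mapping; the right-hand names are what the CentOS kernel actually uses.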

Related

AWS CodeBuild not pausing on breakpoint

Using the steps provided here, I kicked off a CodeBuild build with the following advanced options checked:
Enable session connection
Allow AWS CodeBuild to modify this service role so it can be used with this build project
The buildspec included a codebuild-breakpoint:
version: 0.2
phases:
  pre_build:
    commands:
      - ls -al
      - codebuild-breakpoint
      - cd "${SERVICE_NAME}"
      - ls -al
      - $(aws ecr get-login)
      - TAG="$SERVICE_NAME"
  build:
    commands:
      - docker build --tag "${REPOSITORY_URI}:${TAG}" .
  post_build:
    commands:
      - docker push "${REPOSITORY_URI}:${TAG}"
      - printf '{"tag":"%s"}' $TAG > ../build.json
artifacts:
  files: build.json
The build started and produced the following logs without pausing:
[Container] 2022/02/28 13:49:03 Entering phase PRE_BUILD
[Container] 2022/02/28 13:49:03 Running command ls -al
total 148
drwxr-xr-x 2 root root 4096 Feb 28 13:49 .
drwxr-xr-x 3 root root 4096 Feb 28 13:49 ..
-rw-rw-rw- 1 root root 1818 Feb 28 10:54 user-manager\Dockerfile
-rw-rw-rw- 1 root root 140 Feb 28 10:34 user-manager\body.json
-rw-rw-rw- 1 root root 0 Feb 28 10:54 user-manager\shared-modules\
-rw-rw-rw- 1 root root 4822 Feb 21 14:52 user-manager\shared-modules\config-helper\config.js
-rw-rw-rw- 1 root root 2125 Feb 21 14:52 user-manager\shared-modules\config-helper\config\default.json
-rw-rw-rw- 1 root root 366 Feb 21 14:52 user-manager\shared-modules\config-helper\package.json
-rw-rw-rw- 1 root root 9713 Feb 21 14:52 user-manager\shared-modules\dynamodb-helper\dynamodb-helper.js
-rw-rw-rw- 1 root root 399 Feb 21 14:52 user-manager\shared-modules\dynamodb-helper\package.json
-rw-rw-rw- 1 root root 451 Feb 21 14:52 user-manager\shared-modules\token-manager\package.json
-rw-rw-rw- 1 root root 13885 Feb 21 14:52 user-manager\shared-modules\token-manager\token-manager.js
-rw-rw-rw- 1 root root 44372 Feb 28 10:34 user-manager\src\cognito-user.js
-rw-rw-rw- 1 root root 706 Feb 28 10:34 user-manager\src\package.json
-rw-rw-rw- 1 root root 32734 Feb 28 10:34 user-manager\src\server.js
[Container] 2022/02/28 13:49:03 Running command codebuild-breakpoint
2022/02/28 13:49:03 Build is paused temporarily and you can use codebuild-resume command in the session to resume this build
[Container] 2022/02/28 13:49:03 Running command cd "${SERVICE_NAME}"
/codebuild/output/tmp/script.sh: 4: cd: can't cd to user-manager
My primary question is: Why didn't the build pause and the Session Manager link become available?
Side-quest: The reason I'm trying to debug the session is to try to determine why the process can't CD to the user-manager folder (which clearly exists). Any ideas why?
TLDR: The image on the build machine was too old.
Main quest
The template specified aws/codebuild/ubuntu-base:14.04 as the CodeBuild image. Presumably that image pre-dated the Session Manager functionality (which requires a specific version of the SSM agent to be installed).
I updated the image to aws/codebuild/standard:5.0 and was able to successfully pause at the breakpoint and connect to the session.
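If you'd rather switch the image from the CLI than from the console or the template, something like the following should work (the project name is a placeholder):
# Point the existing CodeBuild project at a newer curated image
aws codebuild update-project \
  --name my-build-project \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true
privilegedMode=true is kept here because the buildspec above builds Docker images.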
Side quest
Once I connected I was able to investigate the cause of the inability to CD to the folder. I can confirm that Tim's shot in the dark was correct! All the entries were in fact files - no folders.
This QuickStart is the gift that keeps on giving! When/if I get all the issues resolved I'll submit a PR to update the project. Those interested in the cause of the file/folder issue can follow up there.
Side quest update
The strange flattening behaviour was due to creating the zip file on a Windows machine and unzipping it on a Unix machine (the build agent uses an Ubuntu image). Zipping it with 7-Zip instead did the job.
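If you hit the same symptom, a quick sanity check before uploading is to list the archive and confirm the entries use forward slashes rather than backslash-flattened names (the archive name and paths below are illustrative):
# Entries should look like user-manager/src/server.js, not user-manager\src\server.js
unzip -l source.zip
# Re-creating the archive on a Unix machine avoids the problem entirely
zip -r source.zip user-manager/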

docker socket at /var/run/docker.sock with AWS

I have an issue where a few tools, Portainer for example, can't find the docker socket on AWS.
I have some setup scripts that were run to set up various containers.
On MacOS, it works without problems.
On a CentOS box, no problem as well.
On CentOS / AWS, containers cannot connect to the docker socket.
I am talking about a local unsecured connection to /var/run/docker.sock
What could be different on AWS?
I can see the socket:
➜ run ls -ld /var/run/docker*
drwxr-xr-x 8 root root 200 Nov 27 14:04 /var/run/docker
-rw-r--r-- 1 root root 4 Nov 27 14:03 /var/run/docker.pid
srw-rw-r-- 1 root docker 0 Nov 27 14:03 /var/run/docker.sock
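For reference, the containers in question are started with the socket bind-mounted into them, roughly like this (image tag and port are the usual defaults, not taken from the actual setup scripts):
# Portainer (or any similar tool) talks to the local daemon through the mounted socket
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer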

Cannot start greengrassd (AWS IoT Greengrass) on Raspberry Pi

I have registered an AWS IoT Greengrass group.
I also downloaded the Greengrass certificates from the console, along with AmazonRootCA1.
Here is the list of my certificate files (stored in /greengrass/certs/):
-rw-r--r-- 1 pi pi 1220 Jan 15 10:07 82ab16xxxx.cert.pem
-rw-r--r-- 1 pi pi 1679 Jan 15 10:07 82ab16xxxx.private.key
-rw-r--r-- 1 pi pi 451 Jan 15 10:07 82ab16xxxx.public.key
-rw-r--r-- 1 pi pi 1188 Jan 15 10:07 root.ca.pem
When I start greengrassd with the command:
sudo ./greengrassd start
I get this error:
Setting up greengrass daemon
Validating hardlink/softlink protection
Waiting for up to 40s for Daemon to start
Error occured while generating TLS config: ErrUnknownURIScheme: no handlers matched for path: .../greengrass/certs/root.ca.pem
The Greengrass daemon process with [pid = 18029] died
I have tried re-installing the OS but the error persists.
I have also installed mosquitto-clients and mosquitto on the Raspberry Pi.
Thanks.
I'm guessing your issue is that you haven't activated your root CA from the console.
try this instead:
sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
Try running this directly in your certs directory, then restart your daemon.
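As a quick sanity check after the download, you can confirm the file is a parseable certificate and is the Amazon root you expect:
# Should print the Amazon Root CA 1 subject/issuer; an HTML error page would fail to parse
openssl x509 -in /greengrass/certs/root.ca.pem -noout -subject -issuer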

Running VirtualBox in Concourse Task

I'm trying to build vagrant boxes with concourse. I'm using the concourse/buildbox-ci image, which is used in concourse's own build pipeline to build the concourse-lite vagrant box.
Before running packer I'm creating the virtualbox devices so they match the host's devices. Nevertheless the packer build fails with:
==> virtualbox-iso: Error starting VM: VBoxManage error: VBoxManage: error: The virtual machine 'packer-virtualbox-iso-1488205144' has terminated unexpectedly during startup with exit code 1 (0x1)
==> virtualbox-iso: VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
Has somebody got this working?
Is the concourse hetzner worker configuration accessible anywhere?
Additional configuration info:
in the concourse job container:
# ls -al /dev/vboxdrv /dev/vboxdrvu /dev/vboxnetctl
crw------- 1 root root 10, 53 Feb 27 14:19 /dev/vboxdrv
crw------- 1 root root 10, 52 Feb 27 14:19 /dev/vboxdrvu
crw------- 1 root root 10, 51 Feb 27 14:19 /dev/vboxnetctl
on the worker host:
# ls -al /dev/vbox*
crw------- 1 root root 10, 53 Feb 24 09:40 /dev/vboxdrv
crw------- 1 root root 10, 52 Feb 24 09:40 /dev/vboxdrvu
crw------- 1 root root 10, 51 Feb 24 09:40 /dev/vboxnetctl
concourse job:
jobs:
- name: mpf
  serial_groups: [build]
  plan:
  - get: vagrant
    trigger: true
  - get: version
    resource: version-mpf
  - task: build
    privileged: true
    file: vagrant/ci/tasks/build.yml
    tags: [vm-builder]
    params:
      TEMPLATE_FILE: virtualbox-mpf.json
vagrant/ci/scripts/build.sh:
#!/bin/bash -ex
mknod -m 0600 /dev/vboxdrv c 10 53
mknod -m 0600 /dev/vboxdrvu c 10 52
mknod -m 0600 /dev/vboxnetctl c 10 51
for name in $(VBoxManage list hostonlyifs | grep '^Name:' | awk '{print $NF}'); do
  VBoxManage hostonlyif remove $name
done
VERSION=$(cat version/version)
packer build -var "version=${VERSION}" "vagrant/packer/${TEMPLATE_FILE}"
vagrant/ci/tasks/build.yml:
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: concourse/buildbox-ci}
inputs:
- name: vagrant
- name: version
outputs:
- name: build
run:
  path: vagrant/ci/scripts/build.sh
Unfortunately the Hetzner worker configuration is basically just us periodically upgrading VirtualBox and fixing things when it falls over. (edit: we also make sure to use the same OS distro in the host and in the container - in our case, Arch Linux).
Make sure your VirtualBox version matches the version in the container - down to the patch version.
The device IDs (10,53 and 10,52 and 10,51) also must match those found on the host - these vary from version to version of VirtualBox.
We also make sure to use a special backend that does not perform any network namespacing, which is important if you're spinning up VMs that need a host-only network.
This whole thing's tricky. :/
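To double-check the device-ID point above, you can read the current major/minor numbers straight from the worker host and compare them against the mknod calls in build.sh (stat prints them in hex):
# On the worker host: print the major/minor numbers VirtualBox registered for this version
stat -c '%n  major=%t minor=%T' /dev/vboxdrv /dev/vboxdrvu /dev/vboxnetctl
If they differ from the hardcoded 10,53 / 10,52 / 10,51, update the mknod lines in build.sh to match.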

What is the default user that codedeploy runs the hook scripts as?

Background: I am facing this error: AWS CodeDeploy deployment throwing "[stderr] Could not open input file" while trying to invoke a PHP file from the sh file at the afterInstall step.
In the afterInstall step, I am trying to run a PHP file from the afterInstall.sh file and I am getting this error - unable to open php file.
I am not sure what exactly to do. I thought of manually checking whether I could run the file as that user.
The CodeDeploy agent default user is root.
The directory listing below shows the ownership of the deployed files in their destination folder, /tmp, after a successful deployment.
ubuntu@ip-10-0-xx-xx:~$ ls -l /tmp
total 36
-rw-r--r-- 1 root root 85 Aug 2 05:04 afterInstall.php
-rw-r--r-- 1 root root 78 Aug 2 05:04 afterInstall.sh
-rw-r--r-- 1 root root 1397 Aug 2 05:04 appspec.yml
-rw------- 1 root root 3189 Aug 2 05:07 codedeploy-agent.update.log
drwx------ 2 root root 16384 Aug 2 03:01 lost+found
-rw-r--r-- 1 root root 63 Aug 2 05:04 out.log
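If you want to confirm this on your own instance, a simple check is to log the effective user from one of your hook scripts and inspect the result after the next deployment (the log path here is arbitrary):
# Inside afterInstall.sh: record which user CodeDeploy executed the hook as
whoami > /tmp/codedeploy-hook-user.txt
id >> /tmp/codedeploy-hook-user.txt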
runas is an optional field in the AppSpec file: the user to impersonate when running the script. By default, scripts run as the user the AWS CodeDeploy agent runs as on the instance (if you don't run the agent as a non-root user, that is root).
To run the host agent as a non-root user, the environment variable CODEDEPLOY_USER needs to be set, as the linked host agent source code shows. The variable can be set to whatever user you want the host agent to run as.
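If you need a hook to run as a different user than the agent, the runas field goes on the hook entry in appspec.yml; a minimal sketch (script path and user are illustrative):
hooks:
  AfterInstall:
    - location: scripts/afterInstall.sh
      timeout: 300
      runas: ubuntu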