I am trying to run git in AWS Lambda to check out a repository.
This is my setup:
I am using Node.js 4.3.
I am not using nodegit because I want to use the "--depth=1" parameter, which nodegit does not support.
I have copied the git and ssh executables from the corresponding AWS AMI and placed them in a "bin" folder in the zip I upload.
I added them to PATH with this:
process.env['PATH'] = process.env['LAMBDA_TASK_ROOT'] + "/bin:" + process.env['PATH'];
The input variables are set like this:
"checkout_url": "git#...",
"branch":"master
Now I do this (for brevity, I mixed some pseudo-code in):
var fs = require('fs');
var execSync = require('child_process').execSync;
downloadDeploymentKeyFromS3Sync('/tmp/ssh_key');   // pseudo-code helper that fetches the deployment key from S3
fs.chmodSync('/tmp/ssh_key', 0o600);               // key must not be readable by others, or ssh refuses it
process.env['GIT_SSH_COMMAND'] = 'ssh -o StrictHostKeyChecking=no -i /tmp/ssh_key';
execSync('git clone --depth=1 ' + checkout_url + ' --branch ' + branch + ' /tmp/checkout');
Running this on my local computer using lambda-local, everything works fine! But when I test it in Lambda, I get:
warning: templates not found /usr/share/git-core/templates
PRIV_END: seteuid: Operation not permitted\r
fatal: Could not read from remote repository.
The "warning" is of course, because I did not install git but just copied the binary. Is that a reason why this should not work?
Why is git needing "setuid"? I read that in some shells, that is disabled for security reasons. So it makes sense that it does not work in lambda. Can git somehow be instructed to not "need" this command?
Yep, this is definitely possible; I've created a Lambda Layer that achieves just this. No need to mess with any env variables; it should work out of the box:
https://github.com/lambci/git-lambda-layer
As stated in the README, all you need to do is add a layer with the following ARN:
arn:aws:lambda:<region>:553035198032:layer:git:<version>
(replace <region> and <version>, check README for latest version)
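For example, one way to attach the layer is with the AWS CLI (the function name below is just a placeholder; fill in <region> and <version> as above):
# Attach the git layer to an existing function
aws lambda update-function-configuration \
    --function-name my-git-function \
    --layers "arn:aws:lambda:<region>:553035198032:layer:git:<version>"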
The issue is that you cannot copy just the git binary. You need a portable build of git, and even with that you're going to have a bad time, because you cannot guarantee that the OS the Lambda function runs on will be compatible with the binary.
Stepping back, I would walk away from this approach completely. I would clone and build a package elsewhere, and then download it in the function pretty much the same way you already do with downloadDeploymentKeyFromS3Sync.
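For illustration, a rough sketch of that pre-packaging step, run wherever git is actually installed (the repository URL and bucket name here are placeholders):
# Run on a build box or CI job that has git installed, not inside Lambda
git clone --depth=1 git@example.com:org/repo.git /tmp/checkout
tar czf /tmp/checkout.tar.gz -C /tmp checkout
aws s3 cp /tmp/checkout.tar.gz s3://my-deploy-bucket/checkout.tar.gz
# The Lambda function then only needs to download and unpack the tarball from S3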
You might consider this a non-answer, but I've found the easiest way to run arbitrary binaries from Lambda is... not to. If I cannot do the work from within a platform-independent, non-binary approach, I integrate Docker into the workflow, managing Docker containers from the Lambda function.
On AWS one way to do this is to use the Elastic Container Service (ECS) to spawn a task that runs git.
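A rough sketch of kicking off such a task, shown in AWS CLI syntax for readability (the cluster, task definition, and container names are hypothetical; from Node you would make the equivalent ECS runTask call via the AWS SDK):
# Launch a one-off ECS task whose container image has git installed
aws ecs run-task \
    --cluster my-cluster \
    --task-definition git-clone-task \
    --overrides '{"containerOverrides":[{"name":"git","command":["git","clone","--depth=1","git@example.com:org/repo.git","/work/checkout"]}]}'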
If you stand up a Docker Swarm instance or integrate another Docker-API compatible service such as Rackspace Carina or Joyent's Triton, then you could use a project I personally put together specifically for integrating AWS Lambda with Docker: "Dockaless".
Good luck!
Related
Is there an environment in which handler.js is running? And if so, what happens if you somehow run sudo rm -rf ~/ in AWS Lambda?
What do you think will happen?
You can think of a Lambda function as a managed (short-lived) Docker container (although "micro-VM" would be more correct, as we learned at re:Invent 2018). You define the compute and RAM resources your "container" has to run a function.
As the documentation states, you get the following environment:
The underlying AWS Lambda execution environment includes the following
software and libraries.
Operating system – Amazon Linux
AMI – amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2
Linux kernel – 4.14.77-70.59.amzn1.x86_64
AWS SDK for JavaScript – 2.290.0
SDK for Python (Boto 3) – boto3-1.7.74 botocore-1.10.74
Furthermore, you're provided with some temporary storage (at the moment 500 MB) at /tmp/.
AWS tries to re-run the handler function in an already running "container" for each Lambda invocation (see here for more details), so I'd imagine you could break your own container - although it apparently doesn't have sudo privileges, so the impact you can have with your sudo rm -rf is limited.
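If you're curious, a quick (hypothetical) way to poke at the sandbox is to shell out to a few standard commands from your handler and log the output:
whoami                                    # an unprivileged sandbox user, not root
id                                        # that user's uid/gid
df -h /tmp                                # the writable scratch space
command -v sudo || echo "sudo not found"  # sudo is typically not even installed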
I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container in debug mode and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more detail: any S2I builder image will gladly use a custom run script that you supply to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
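A minimal sketch of such a run script (this simply delegates to your existing app.sh through bash, which also sidesteps the missing executable bit; you could equally start gunicorn or manage.py directly here):
#!/bin/bash
# .s2i/bin/run -- used by S2I instead of the stock run script.
# Whatever is started here becomes the container's main process.
exec bash /opt/app-root/src/app.sh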
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification (as was the case for me), you have to use git update-index --chmod=+x app.sh for it to work.
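In other words, the workflow might look like this:
chmod +x app.sh                        # fix the working copy
git update-index --chmod=+x app.sh     # make git record the executable bit
git ls-files --stage app.sh            # should now show mode 100755
git commit -m "Make app.sh executable"
git push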
Currently on deployment I get:
Hook /opt/elasticbeanstalk/hooks/preinit/30directories.sh failed
I want to remove the hook entirely using .ebextensions, I am currently using:
/.ebextensions/01-remove-unused.config
commands:
  removeunused:
    command: "rm -f /opt/elasticbeanstalk/hooks/preinit/30directories.sh"
    ignoreErrors: true

files:
  "/opt/elasticbeanstalk/hooks/preinit/30directories.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      ls
I'm not sure how relevant this is, but which ElasticBeanstalk platform are you using? For 64bit Amazon Linux 2016.09 v2.3.0 running Docker 1.11.2 specifically (and maybe other platforms too), I don't believe there is any way to do this the way you are describing.
Unfortunately, the preinit scripts are executed well before ElasticBeanstalk will inject .ebextensions into your environment, and they are only run when a fresh instance is started. To confirm this, you can inspect /var/log/eb-activity.log on a freshly deployed ElasticBeanstalk instance, which shows you everything related to the bootstrapping process that AWS logs for you. Search for Initialization/PreInitStage0/PreInitHook in this log file, and then also search for .ebextensions; you will see that the preinit scripts indeed get executed before most everything else, and .ebextensions files come much later. (for what it's worth, this blog post might help further help understand which hooks get run at which times)
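For example, assuming you can get a shell on a freshly bootstrapped instance (e.g. via eb ssh), something like this shows the ordering:
# Compare where the preinit hooks and the .ebextensions processing appear in the log
grep -n "Initialization/PreInitStage0/PreInitHook" /var/log/eb-activity.log
grep -nF ".ebextensions" /var/log/eb-activity.log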
What you could potentially do is configure an .ebextensions script to execute before all other non-preinit hooks scripts that will re-execute (and potentially undo changes from) all of the preinit scripts. However, I would guess that this would be more trouble than it is worth, as there are likely unintended side effects that could come from this.
At any rate, these are my findings trying to do something similar. Hopefully, this helps (despite the fact that I haven't technically solved your problem)!
I am stuck using the Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folder into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after I moved to cd /Users/Invictus/EC2.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS Unified CLI Tools are much, much easier to set up. All you need is Python, which comes built in on every Mac.
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path (see the corrected profile sketch below).
Instead of ec2-describe-images, can you run it as "./ec2-describe-images" - does that work? If not, can you check the permissions on that script?
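For reference, a corrected ~/.bash_profile might look like this, assuming you keep the tools and keys in /Users/Invictus/EC2 (adjust EC2_HOME if they live elsewhere):
# ~/.bash_profile -- must live in your home directory
export EC2_HOME=/Users/Invictus/EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=$(ls $EC2_HOME/pk-*.pem)
export EC2_CERT=$(ls $EC2_HOME/cert-*.pem)
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/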
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error is caused by the fact that ssh-ing in manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing in as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find a setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config; you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not be "faster" to run (to update /etc/sudoers.d/ for !requiretty) than Vagrant is trying to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data passes its information to) can apply the workaround for #72 on the machine. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
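Once the instance is patched you can snapshot it into your own AMI, for example with the AWS CLI (the instance ID below is a placeholder):
$ aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "amzn-linux-requiretty-patched" \
    --description "Stock Amazon Linux with the !requiretty fix for vagrant-aws #72"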
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).