I am trying to run an AWS Lambda project locally on Ubuntu. When I run the project with AWS SAM Local it shows me this error: Error: Running AWS SAM projects locally requires Docker. Have you got it installed?
I had trouble installing it on Fedora.
When I followed the Docker postinstall instructions I managed to get past this issue.
https://docs.docker.com/install/linux/linux-postinstall/
I had to (see the command sketch after this list):
Delete the ~/.docker directory;
Create the "docker" group;
Add my user to the "docker" group;
Logout and back in again;
Restart the "docker" daemon.
I was then able to run the command:
sam local start-api
If you want to run the SAM CLI locally, you first have to install Docker from the official Docker website and then run sudo sam local start-api. Note that sudo is necessary to run the local development server with the needed privileges.
This error mostly arises from lacking the privileges to use docker. Just add sudo to your command and it will work.
e.g.: sudo sam local start-api --region eu-west-3
We are working on Macs and were seeing the same message when using an older version of Docker (1.12.6). We have since updated to a newer (but not latest) version, 17.12.0-ce-mac49, and it is now fine.
Another cause for this is this recent issue within Docker for Mac.
A quick workaround, as specified in the issue itself, is to run SAM with:
$ DOCKER_HOST=unix://$HOME/.docker/run/docker.sock sam local start-api
You don't need to run SAM as root.
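A quick way to sanity-check that socket path before running SAM is to point the plain Docker CLI at it (just a diagnostic, not part of the fix itself):
$ DOCKER_HOST=unix://$HOME/.docker/run/docker.sock docker info
If docker info prints server details rather than a connection error, SAM should be able to use the same socket.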
I am using Colima for Docker on a Mac with an Intel chip and faced this error. I was able to resolve it by adding DOCKER_HOST to my .zshrc file:
vi ~/.zshrc
Paste export DOCKER_HOST="unix://$HOME/.colima/docker.sock" into the .zshrc file.
Press Escape, then type :wq to save and quit.
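If you'd rather not edit the file in vi, appending the line and reloading the config does the same thing:
echo 'export DOCKER_HOST="unix://$HOME/.colima/docker.sock"' >> ~/.zshrc
source ~/.zshrc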
I have Docker up and running on my Mac, but the sam local invoke command results in: Error: Running AWS SAM projects locally requires Docker. Have you got it installed and running?
Does anyone know what the reason could be?
There was a change in Docker Desktop's default context settings.
On a Mac, it may default to desktop-linux if the symlink to /var/run/docker.sock was not created. This seems to cause problems with the SAM CLI (and probably a lot of other apps) that expects to communicate with Docker using this socket.
The easiest way to fix this is to set DOCKER_HOST environment variable to point to the socket file associated with desktop-linux.
Run export DOCKER_HOST="unix:///Users/<username>/.docker/run/docker.sock"
(add this to your .zshrc or .bashrc file so that you don't have to define the variable every time you start a shell)
and then try running the sam command.
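To see which context your Docker CLI is actually using and which socket it points at, you can list and inspect the contexts:
docker context ls
docker context inspect desktop-linux
The active context is marked with an asterisk in the ls output, and the socket path appears under the Endpoints section of the inspect output.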
I'm trying to create a Docker context that will automatically integrate with AWS's ECS.
I'm following this tutorial
The author just does:
docker context create ecs myecs and gets a "pick an integration" prompt, whereas I get an error saying it needs exactly 1 argument.
docker context create" requires exactly 1 argument.
See 'docker context create --help'.
Usage: docker context create [OPTIONS] CONTEXT
Create a context
You need to install the Docker Compose CLI preview
The curl command below is from the Docker docs:
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
sudo docker context create ecs myecs
It didn't work without sudo for me for some reason.
After the script finished I had some weird errors:
cp: cannot stat '/tmp/tmp.d4QjhW8T6k/docker-compose': No such file or directory and docker context create ecs myecs didn't work at first, but once I tried with sudo it worked fine.
EDIT: . ~/.zshrc (or just close your terminal and open a new one) made it possible for me to run docker context create ecs myecs without sudo.
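A quick sanity check after the script finishes (and after reloading your shell, per the EDIT above) is to confirm which docker binary you're actually running, since the PATH change is what sudo was masking:
which docker
docker context create --help
With the Compose CLI wrapper active, the help output should mention the ecs context type.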
Author of the blog/tutorial here. It looks like you don't have the prerequisite installed. In the blog I call out the prerequisites in pieces like this:
....In July, Docker released a beta for Docker Desktop that embedded these functionalities and, on September 15th, Docker released an updated experience in their Docker Desktop stable channel....
and then
...For now the only thing you need is Docker Desktop and an AWS account. For this test, I am using Docker Desktop (stable) version 2.5.0.1....
and finally
The core of this integration is built around a new tool dubbed Compose CLI (this is not to be confused with the original docker-compose CLI). This new CLI surfaces to the user as new functionalities in the docker command. While in Docker Desktop all this plumbing is completely hidden and available out of the box, if you are using a Linux machine you can set it up using either a script or a manual install. This new CLI is, essentially, a new version of the docker binary.
Eager to understand how we could make it clearer / more front and center that there was stuff to install and/or minimum software versions you had to use.
Thanks for trying it out!
If you're on Linux and you're running the docker context create ecs myecscontext command from the docs then try enabling experimental features in docker:
Edit /etc/docker/daemon.json
Set contents to
{
"experimental": true
}
Restart the docker service: sudo systemctl restart docker
Exit your terminal and open a new one so that the changes take effect.
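To verify the flag took effect after the restart, you can ask the daemon directly (it should print true):
docker version --format '{{.Server.Experimental}}'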
I had the same issue, but after installing the Docker Desktop version the problem was resolved.
The server-side version doesn't have this kind of functionality.
I tried to install localstack to test my lambda function, which uses the dynamodb and sqs services.
To test this lambda function I installed localstack and followed the steps given in its Readme.md file.
But when I tried to start the localstack service by running the localstack start command, it ended with the exception below.
I used the pip install localstack command for the installation.
Exception:
sfanish@fanish-PC MINGW64 /c/Python27/Scripts
$ localstack start
2017-12-27T14:37:50:INFO:localstack.services.install: Downloading and installing local Elasticsearch server. This may take some time.
Starting local dev environment. CTRL-C to quit.
ERROR: 'mkdir -p C:\python27\lib\site-packages\localstack/infra': The syntax of the command is incorrect.
I am using a Windows 10 64-bit machine with Python 2.7. Any help will be great :)
The error is due to the fact that the path contains both '/' and '\'. Windows, even with MINGW64, is not well supported by localstack.
Solutions:
Try the latest version of localstack/localstack.
Fix the issue in the localstack code.
Use a virtual environment (VirtualBox, Hyper-V, Docker) that fully supports GNU/Linux syntax (see the sketch below).
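For the Docker option, a minimal sketch is to run localstack's official image instead of installing it with pip (the exposed ports vary between localstack versions, so check the image docs for yours):
docker run --rm -it -p 4566:4566 localstack/localstack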
I'm integrating AWS Auto Scaling Group with Code Deploy.
I wrote a bash script for AfterInstall hook.
The script executes composer update and composer dump-autoload, since my code uses PHP.
And here is the problem.
When I deploy, deployment fails with this log.
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
But when I get onto the instance via SSH and run composer, it works fine.
How do I fix this? Has anyone worked around this issue?
Any answer will be appreciated. Thank you for your time.
I had a similar problem using Elastic Beanstalk and I fixed it by adding an environment variable.
You should be able to achieve this in CodeDeploy too, for example when creating the application.
See also https://github.com/composer/composer/issues/4789
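One way to apply that here is to export the variables at the top of the AfterInstall hook script itself, so composer doesn't depend on a login environment. A sketch, with example paths that you'd match to your deploy user:
#!/bin/bash
export HOME=/home/ubuntu                      # example; set to your deploy user's home
export COMPOSER_HOME=/home/ubuntu/.composer   # example path
composer update
composer dump-autoload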
Could you make sure the env variable is also accessible to the user you specify in the appspec file, which runs the hook script? If you have multiple users running on the instance, the env variable might not be accessible to every user, depending on how you set it up.
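For reference, the hook user is set per event in the appspec file; a sketch with an example user and script path:
hooks:
  AfterInstall:
    - location: scripts/after_install.sh   # example path
      timeout: 300
      runas: ubuntu                        # env vars must be visible to this user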
I have the same concern regarding composer install using CodeDeploy. It runs well in development, but when I ran it in production I got:
[stderr] [RuntimeException]
[stderr] The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
I SSH into the instance, run composer, and get:
user@server:~/httpdocs$ /opt/plesk/php/7.2/bin/php /usr/lib/plesk-9.0/composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
Generating optimized autoload files
user@server:~/httpdocs$
I have one EC2 instance and I deploy into 2 separate directories for stg and prod.
[screenshot: codedeploy deployment error]
I have a provisioning setup with Vagrant and Puppet that works well locally, and I'm now trying to move it to AWS using vagrant-aws.
As I understand it, I can make use of the AWS user-data field in Vagrant to run commands on the first boot of a new VM, like so:
aws.user_data = File.read("user_data.txt")
Where user_data.txt contains:
#!/bin/bash
sudo apt-get install -y puppet-common
Then my existing puppet provisioning scripts should be able to run. However, this errors out during the vagrant up command with:
[aws] Running provisioner: puppet...
The `puppet` binary appears to not be in the PATH of the guest. This
could be because the PATH is not properly setup or perhaps Puppet is not
installed on this guest. Puppet provisioning can not continue without
Puppet properly installed.
But when I SSH into the machine I see that the user-data did get parsed and puppet was installed successfully. Is the puppet provisioner maybe running before the user-data script installs puppet? Or is there some better way to install puppet on a VM before trying to provision it?
It is broken, but there's a workaround if you're using Ubuntu that is far simpler than building your own AMI.
Add the following line to your config:
aws.user_data = "#cloud-config\nbootcmd:\n - echo 'manual' > /etc/init/ssh.override\npackages:\n - puppet\nruncmd:\n - [ 'rm', '/etc/init/ssh.override' ]\n - [ 'service', 'ssh', 'start' ]\n"
This tells Cloudinit to disable SSH startup early in the boot process and re-enable it once your packages are installed. Now Vagrant can only SSH in to run puppet once the packages are fully installed.
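Expanded for readability, that one-liner is the following cloud-config:
#cloud-config
bootcmd:
 - echo 'manual' > /etc/init/ssh.override
packages:
 - puppet
runcmd:
 - [ 'rm', '/etc/init/ssh.override' ]
 - [ 'service', 'ssh', 'start' ]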
This will probably work for other distros that use Cloudinit aside from Ubuntu, although it is Upstart-specific so the commands may need tweaking.
Well, I worked around this by building my own AMI with puppet and the other things I need installed. It still seems like vagrant-aws is broken, or I'm misunderstanding something else here.