Crossbar 0.12.1: No module named django - wsgi error - django

I get an error when I launch Crossbar 0.12.1 that I did not have with version 0.11:
[Controller 210] crossbar.error.invalid_configuration:
WSGI app module 'myproject.wsgi' import failed: No module named django -
Python search path was [u'/myproject', '/opt/crossbar/site-packages/crossbar/worker', '/opt/crossbar/bin', '/opt/crossbar/lib_pypy/extensions', '/opt/crossbar/lib_pypy', '/opt/crossbar/lib-python/2.7', '/opt/crossbar/lib-python/2.7/lib-tk', '/opt/crossbar/lib-python/2.7/plat-linux2', '/opt/crossbar/site-packages']
I have not changed anything other than the Crossbar update.
My config.json is still the same, with the pythonpath of my project set in the worker options:
{
    "workers": [
        {
            "type": "router",
            "options": {
                "pythonpath": ["/myproject"]
            },
            "realms": [
                {
                    "name": "realm1",
                    "roles": [
                        {
                            "name": "anonymous",
                            "permissions": [
                                {
                                    "uri": "*",
                                    "publish": true,
                                    "subscribe": true,
                                    "call": true,
                                    "register": true
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "web",
                    "endpoint": {
                        "type": "tcp",
                        "port": 80
                    },
                    "paths": {
                        "/": {
                            "type": "wsgi",
                            "module": "myproject.wsgi",
                            "object": "application"
                        },
                        etc...
Do you have any idea?
Thanks.

It seems that "pythonpath": ["/myproject"] replaces the other Python path entries coming from your dist-packages. Look for an option that adds /myproject instead of replacing the current path settings.
Or add the path to your project to the machine's Python path, and don't provide Crossbar with any python path, so it will pick up the existing one.
Something like (depends on OS):
$ sudo nano /usr/lib/python2.7/dist-packages/myproject.pth
Then:
/home/username/path/to/myproject
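Alternatively, a minimal sketch of the second approach (an assumption on my side that the Crossbar router worker inherits the controller's environment): export PYTHONPATH yourself and drop the pythonpath option from config.json:
export PYTHONPATH=/myproject:$PYTHONPATH   # prepend the project instead of replacing the path
crossbar start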

I work with Docker in order to have a clean environment.
The Dockerfile here: http://crossbar.io/docs/Installation-on-Docker/ seems broken:
ImportError: No module named setuptools_ext
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-VfPnRU/pynacl
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install crossbar[all]' returned a non-zero code: 1
It seems solved by adding:
RUN pip install --upgrade cffi
before RUN pip install crossbar[all].
With this environment, my problem is solved :)
I don't know why I got this error before, but it works now.
Many thanks to all here and to indexerror, the "French Python Stack Overflow" :)
http://indexerror.net/3380/crossbar-0-12-1-wsgi-error-no-module-named-django?show=3415
P.S.
Here is the clean Dockerfile I use:
FROM ubuntu
ENV APPNAME="monappli"
ADD requirements.txt /tmp/
RUN apt-get update
RUN apt-get install -y gcc build-essential python-dev python2.7-dev libxslt1-dev libssl-dev libxml2 libxml2-dev tesseract-ocr python-imaging libffi-dev libreadline-dev libbz2-dev libsqlite3-dev libncurses5-dev python-mysqldb python-pip
RUN cd /tmp/ && pip install -r requirements.txt
RUN pip install -U crossbar[all]
WORKDIR $APPNAME
CMD cd / && cd $APPNAME && python manage.py makemigrations && python manage.py migrate && crossbar start
Put Django, Flask and/or all the dependencies you want in a file named "requirements.txt" in the same folder as the Dockerfile.
Example requirements.txt:
ipython
django
djangorestframework
djangorestframework-jwt
django-cors-headers
bottlenose
python-amazon-simple-product-api
python-dateutil
beautifulsoup4
datetime
mechanize
pytesseract
requests
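For reference, building and running this image could look roughly like the following (a sketch: the image tag and the bind mount of the project at /monappli are my assumptions, since the Dockerfile installs dependencies but does not copy the application code in, and port 80 matches the web transport in config.json above):
docker build -t monappli .
# mount the project where WORKDIR/CMD expect it and publish the web transport port
docker run -d -p 80:80 -v $(pwd):/monappli monappli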

Related

How to use zappa in gitlab CI/CD to deploy app to AWS Lambda?

I am trying to deploy a Flask application to AWS Lambda via Zappa through GitLab CI. Since inline editing isn't possible via GitLab CI, I generated the zappa_settings.json file on my remote computer and I am trying to use it to do zappa deploy dev.
My zappa_settings.json file:
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "prices-service-",
        "runtime": "python3.7",
        "s3_bucket": -MY_BUCKET_NAME-
    }
}
My .gitlab-ci.yml file:
image: ubuntu:18.04
stages:
  - deploy
before_script:
  - apt-get -y update
  - apt-get -y install python3-pip python3.7 zip
  - python3.7 -m pip install --upgrade pip
  - python3.7 -V
  - pip3.7 install virtualenv zappa
deploy_job:
  stage: deploy
  script:
    - mv requirements.txt ~
    - mv zappa_settings.json ~
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    - zappa deploy dev
The CI file, upon running, gives me the following error:
Any suggestions are appreciated
zappa_settings.json is committed to the repo and not created on the fly. What is created on the fly is the AWS credentials file. The required values are read from GitLab environment variables set in the web UI of the project.
zappa_settings.json
{
    "prod": {
        "lambda_handler": "main.handler",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "dummy-name",
        "s3_bucket": "dummy-name",
        "aws_environment_variables": {
            "STAGE": "prod",
            "PROJECT": "dummy-name"
        }
    },
    "dev": {
        "extends": "prod",
        "debug": true,
        "aws_environment_variables": {
            "STAGE": "dev",
            "PROJECT": "dummy-name"
        }
    }
}
.gitlab-ci.yml
image: python:3.6

stages:
  - test
  - deploy

variables:
  AWS_DEFAULT_REGION: "eu-central-1"
  # variables set in gitlab's web gui:
  # AWS_ACCESS_KEY_ID
  # AWS_SECRET_ACCESS_KEY

before_script:
  # adding pip cache
  - export PIP_CACHE_DIR="/home/gitlabci/cache/pip-cache"

.zappa_virtualenv_setup_template: &zappa_virtualenv_setup
  # `before_script` should not be overridden in the job that uses this template
  before_script:
    # creating virtualenv because zappa MUST have it and activating it
    - pip install virtualenv
    - virtualenv ~/zappa
    - source ~/zappa/bin/activate
    # installing requirements in virtualenv
    - pip install -r requirements.txt

test code:
  stage: test
  before_script:
    # installing testing requirements
    - pip install -r requirements_testing.txt
  script:
    - py.test

test package:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: test
  script:
    - zappa package $ZAPPA_STAGE

deploy to production:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: deploy
  environment:
    name: production
  script:
    # creating aws credentials file
    - mkdir -p ~/.aws
    - echo "[default]" >> ~/.aws/credentials
    - echo "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
    - echo "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
    # try to update, if the command fails (probably not even deployed) do the initial deploy
    - zappa update $ZAPPA_STAGE || zappa deploy $ZAPPA_STAGE
  after_script:
    - rm ~/.aws/credentials
  only:
    - master
I haven't used zappa in a while, but I remember that a lot of errors were caused by bad AWS credentials, with zappa reporting something else.
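A cheap sanity check I'd add (my suggestion, not part of the original setup) is to confirm, inside the CI job, that the credentials the runner ends up with are actually valid before calling zappa:
pip install awscli
aws sts get-caller-identity   # fails immediately if the generated ~/.aws/credentials is wrong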

Docker container quit unexpectedly...(Error in deploying docker to Elastic Beanstalk)

I am trying to deploy a docker container to Elastic Beanstalk in AWS. I'm repeatedly getting an error while doing so, and each time the error is related to the ENTRYPOINT that I specified in dockerrun.aws.json. What am I doing wrong here?
The web app uses Django, Python 3 and Keras.
This is my Dockerfile content:
# reference: https://hub.docker.com/_/ubuntu/
FROM ubuntu:18.04
RUN apt-get update && apt-get install \
-y --no-install-recommends python3 python3-virtualenv
# Adds metadata to the image as a key value pair, example: LABEL version="1.0"
LABEL maintainer="Amir Ashraff <amir.ashraff@gmail.com>"
##Set environment variables
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m virtualenv --python=/usr/bin/python3 $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Install dependencies:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Open Ports for Web App
EXPOSE 8000
WORKDIR /manage.py
COPY . /manage.py
RUN chmod +x /manage.py
ENTRYPOINT [ "python3" ]
CMD [ "python3", "manage.py runserver 0.0.0.0:8000" ]
And this is the dockerrun.aws.json content:
{
    "AWSEBDockerrunVersion": "1",
    "Ports": [
        {
            "ContainerPort": ""
        }
    ],
    "Volumes": [
        {
            "HostDirectory": "/~/aptos",
            "ContainerDirectory": "/aptos/diabetes_retinopathy_recognition"
        }
    ],
    "Logging": "/aptos/diabetes_recognition",
    "Entrypoint": "/opt/venv/bin/python3",
    "Command": "python3 manage.py runserver 0.0.0.0:8000"
}
And this is the error from AWS logs:
Docker container quit unexpectedly on Tue Aug 20 13:03:47 UTC 2019:
/opt/venv/bin/python3: can't open file 'python3': [Errno 2] No such file
or directory.
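For what it's worth, that error line suggests the Entrypoint and Command are being concatenated, so the container effectively runs /opt/venv/bin/python3 python3 manage.py ... and Python tries to open a file literally named python3. A sketch of a dockerrun.aws.json without the duplicated interpreter (an assumption on my part, not a verified fix; the port value is only taken from the EXPOSE 8000 in the Dockerfile, and JSON does not allow comments, so all the hedging lives here):
{
    "AWSEBDockerrunVersion": "1",
    "Ports": [
        {
            "ContainerPort": "8000"
        }
    ],
    "Entrypoint": "/opt/venv/bin/python3",
    "Command": "manage.py runserver 0.0.0.0:8000"
}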

Using ec2_snapshot getting error message "boto required for this module"

I am using Ubuntu Server 16.04 with Ansible 2.4 on AWS.
My playbook is trying to take snapshots of EC2 volumes. See below:
- hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  gather_facts: yes
  any_errors_fatal: True

- name: Take snapshots of all volumes
  ec2_snapshot:
    volume_id: "{{item.id}}"
    description: "Taken on {{ ansible_date_time.date }}"
    snapshot_tags:
      frequency: hourly
  with_items: "{{ aws_ec2_vol_setting }}"
I run the playbook with the following cmd
ansible-playbook -vvv pb_aws_backup_nw_us_sat.yml
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_snapshot.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: r_ansible
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036 `" && echo ansible-tmp-1517934063.54-197986659054036="` echo /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmprERpt_ TO /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036/ec2_snapshot.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036/ /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036/ec2_snapshot.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ociuiywkpfvurbbesjwxhczxoglttlsa; /usr/bin/python /home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036/ec2_snapshot.py; rm -rf "/home/r_ansible/.ansible/tmp/ansible-tmp-1517934063.54-197986659054036/" > /dev/null 2>&1'"'"' && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_snapshot.py
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
The full traceback is:
File "/tmp/ansible_gUIlz4/ansible_module_ec2_snapshot.py", line 127, in <module>
import boto.ec2
failed: [localhost] (item={u'vol': u'vol-us-sat-01', u'id': u'vol-0b6aaa3b8289580f6', u'server': u'us-nv-sat-01'}) => {
"changed": false,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"description": null,
"device_name": null,
"ec2_url": null,
"instance_id": null,
"last_snapshot_min_age": 0,
"profile": null,
"region": null,
"security_token": null,
"snapshot_id": null,
"snapshot_tags": {
"frequency": "hourly"
},
"state": "present",
"validate_certs": true,
"volume_id": "vol-0b6aaXXXXXXXX",
"wait": true,
"wait_timeout": 0
}
},
"item": {
"id": "vol-0b6aXXXXXXXXXX",
"server": "us-nv-sat-01",
"vol": "vol-us-sat-01"
},
"msg": "boto required for this module"
}
Note: the access key and secret key are empty because I am using an IAM role that is assigned to the server.
I have checked my host, and to me at least it looks like I have all the requirements.
$ which python
/usr/bin/python
$ pip list boto | grep boto
boto (2.48.0)
boto3 (1.5.23)
botocore (1.8.37)
$ python -V
Python 2.7.12
The Python modules all seem to be there and import fine as well:
$ python
Python 2.7.12 (default, Dec 4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto
>>> import boto.ec2
>>> import boto3
So I am not sure why I am getting the error "boto required for this module".
I have also tried, without any success, the solution suggested here, but I still get the issue.
It might be that the pip packages you are seeing somehow can't be used by the default Python install (if you run sudo pip freeze | grep boto you can see the full list of packages visible to root); if boto is not there, you may need to install it using sudo pip install. On the other hand, what I'd rather do is set up a Python virtual env and install all the packages Ansible requires only in that isolated env, like:
Dependencies:
sudo apt-get install python-setuptools
sudo apt-get install python-pip
sudo pip install virtualenv
Then create the virtual env:
virtualenv ansible_vEnv
Activate the virtual env:
source ansible_vEnv/bin/activate
Then just install with pip all the Python dependencies ec2.py needs:
boto > 2.45
boto3 > 1.5 (not sure if this one's required though)
botocore
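Inside the activated virtualenv that might look something like this (the version pins are only taken loosely from the list above):
source ansible_vEnv/bin/activate
pip install 'boto>2.45' 'boto3>1.5' botocore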
Hope it helps!
I would suggest you uninstall ansible with apt-get and install it with pip.
sudo apt remove ansible
sudo apt install gcc python-dev python-pip
sudo pip install --upgrade PyCrypto ansible awscli boto boto3 ansible-role-manager ansible-playbook-debugger retry
What @Konstantin Suvorov suggested was correct: boto was not installed for root/sudo.
Not sure how I did it, but pip and boto were installed under the local user at ~/.local/lib/python2.7/site-packages/.
When I logged in as root and tried to run pip to install boto, I got the following error:
root$ pip
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 637, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 650, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==9.0.1' distribution was not found and is required by the application
To find what version of pip root was using I did the following: $ python -c "import pip; print(pip.__version__)"
and found it was pip version 8.1.1. I then updated the following file: $ vi /usr/local/bin/pip
#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==9.0.1','console_scripts','pip'
__requires__ = 'pip==9.0.1'
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
)
and changed all references of 9.0.1 to 8.1.1.
This allowed pip to work while logged in as root. I then updated pip with sudo -H pip install --upgrade pip, and checking with python -c "import pip; print(pip.__version__)" shows the correct version for both the root and ansible users.
This means all pip installs on the box are now the same version. I am sure this is not a good solution, and it would be better for all users (ansible and root) to point to the same pip install, but I am not sure how to do this.
With pip working I was able to install boto and boto3 with pip install boto and pip install boto3.

Docker Port Forwarding

I am trying to run my django application in a docker container. I am using uWSGI to serve the django application, and also have a celery worker running in the background. These processes are started by supervisord.
The problem that I am having is that I am unable to see the application on the port that I would expect to see it on. I am exposing port 8080 and running the uwsgi process on 8080, but cannot find my application in a browser at the ip address $(boot2docker ip):8080. I just get Google Chrome's 'This webpage is not available'. (I am using a Mac, so I need to get the boot2docker ip address). The container is clearly running, and reports that my uwsgi and celery processes are both successfully running as well.
When I run docker exec CONTAINER_ID curl localhost:8080 I get a response like
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21 0 21 0 0 3150 0 --:--:-- --:--:-- --:--:-- 3500
... so it seems like the container is accepting connections on port 8080.
When I run docker exec CONTAINER_ID netstat -lpn |grep :8080 I get tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 11/uwsgi
When I run docker inspect CONTAINER_ID I get the following:
[{
"AppArmorProfile": "",
"Args": [
"-c",
"/home/docker/code/supervisor-app.conf"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"supervisord",
"-c",
"/home/docker/code/supervisor-app.conf"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"env=staging"
],
"ExposedPorts": {
"8080/tcp": {}
},
"Hostname": "21443d8a16df",
"Image": "vitru",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2014-12-27T01:00:22.390065668Z",
"Driver": "aufs",
"ExecDriver": "native-0.2",
"HostConfig": {
"Binds": null,
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": [],
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": null,
"LxcConf": [],
"NetworkMode": "bridge",
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": null
},
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hosts",
"Id": "21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607",
"Image": "de52fbada520519793e348b60b608f7db514eef7fd436df4542710184c1ecb7f",
"MountLabel": "",
"Name": "/suspicious_fermat",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.87",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:57",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"Path": "supervisord",
"ProcessLabel": "",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/resolv.conf",
"State": {
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"Paused": false,
"Pid": 16230,
"Restarting": false,
"Running": true,
"StartedAt": "2014-12-27T01:00:22.661588847Z"
},
"Volumes": {},
"VolumesRW": {}
}
]
As someone not terribly fluent in Docker, I'm not really sure what all that means. Maybe there is a clue in there as to why I cannot connect to my server?
Here is my Dockerfile, so you can see if I'm doing anything blatantly wrong in there.
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
# Create virtualenv and run pip install
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
EXPOSE 8080
# The supervisor conf file starts uwsgi on port 8080 and starts a celeryd worker
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
I believe the problem you have is that EXPOSE only makes the ports available between containers... not to the host system. See docs here:
https://docs.docker.com/reference/builder/#expose
You need to "publish" the port via the -p flag for docker run command:
https://docs.docker.com/reference/run/#expose-incoming-ports
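For example (the image name vitru and port 8080 are taken from the question above), publishing the container's port 8080 on the host would look like:
docker run -d -p 8080:8080 vitru
After that, the app should be reachable at $(boot2docker ip):8080.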
There is a similar distinction in Fig, if you were using it, between expose and ports directives in the fig.yml file.

How do you mock OpsWorks specific services/dependencies when developing locally with Kitchen and Chef?

I'm writing Chef wrappers around some of the built in OpsWorks cookbooks. I'm using Berkshelf to clone the OpsWorks cookbooks from their github repo.
This is my Berksfile:
source 'https://supermarket.getchef.com'
metadata
def opsworks_cookbook(name)
cookbook name, github: 'aws/opsworks-cookbooks', branch: 'release-chef-11.10', rel: name
end
%w(dependencies scm_helper mod_php5_apache2 ssh_users opsworks_agent_monit
opsworks_java gem_support opsworks_commons opsworks_initial_setup
opsworks_nodejs opsworks_aws_flow_ruby
deploy mysql memcached).each do |cb|
opsworks_cookbook cb
end
My metadata.rb:
depends 'deploy'
depends 'mysql'
depends 'memcached'
The problem is, when I try to override attributes that depend on the opsworks key in the node hash, I get a:
NoMethodError
-------------
undefined method `[]=' for nil:NilClass
OpsWorks has a whole bunch of pre-recipe dependencies that create these keys and do a lot of their setup. I'd like to find a way to either pull in those services and run them against my Kitchen instances or mock them in a way that I can actually test my recipes.
Is there a way to do this?
I would HIGHLY recommend you check out Mike Greiling's blog post Simplify OpsWorks Development With Packer and his GitHub repo opsworks-vm, which help you mock the entire OpsWorks stack, including the install of the OpsWorks agent, so you can also test app deploy recipes, multiple layers, multiple instances at the same time, etc. It is extremely impressive.
Quick Start on Ubuntu 14.04
NOTE: This can NOT be done from an ubuntu virtual machine because virtualbox does not support nested virtualization of 64-bit machines.
Install ChefDK
mkdir /tmp/packages && cd /tmp/packages
wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.8.1-1_amd64.deb
sudo dpkg -i chefdk_0.8.1-1_amd64.deb
cd /opt/chefdk/
chef verify
which ruby
echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile && source ~/.bash_profile
Install VirtualBox
echo 'deb http://download.virtualbox.org/virtualbox/debian vivid contrib' > /etc/apt/sources.list.d/virtualbox.list
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt-get update -qqy
sudo apt-get install virtualbox-5.0 dkms
Install Vagrant
cd /tmp/packages
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
sudo dpkg -i vagrant_1.7.4_x86_64.deb
vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin list
Install Packer
mkdir /opt/packer && cd /opt/packer
wget https://dl.bintray.com/mitchellh/packer/packer_0.8.6_linux_amd64.zip
unzip packer_0.8.6_linux_amd64.zip
echo 'PATH=$PATH:/opt/packer' >> ~/.bash_profile && source ~/.bash_profile
Build Mike Greiling's opsworks-vm virtualbox image using Packer
mkdir ~/packer && cd ~/packer
git clone https://github.com/pixelcog/opsworks-vm.git
cd opsworks-vm
rake build install
This will install a new virtualbox vm to ~/.vagrant.d/boxes/ubuntu1404-opsworks/
To mock a single opsworks instance, create a new Vagrantfile like so:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu1404-opsworks"
  config.vm.provision :opsworks, type: 'shell', args: 'path/to/dna.json'
end
The dna.json file path is set relative to the Vagrantfile and should contain any JSON data you wish to send to OpsWorks Chef.
For example:
{
    "deploy": {
        "my-app": {
            "application_type": "php",
            "scm": {
                "scm_type": "git",
                "repository": "path/to/my-app"
            }
        }
    },
    "opsworks_custom_cookbooks": {
        "enabled": true,
        "scm": {
            "repository": "path/to/my-cookbooks"
        },
        "recipes": [
            "recipe[opsworks_initial_setup]",
            "recipe[dependencies]",
            "recipe[mod_php5_apache2]",
            "recipe[deploy::default]",
            "recipe[deploy::php]",
            "recipe[my_custom_cookbook::configure]"
        ]
    }
}
To mock multiple opsworks instances and include layers see his AWS OpsWorks "Getting Started" Example which includes the stack.json below.
Vagrantfile (for multiple instances)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu1404-opsworks"

  # Create the php-app layer
  config.vm.define "app" do |layer|
    layer.vm.provision "opsworks", type: "shell", args: [
      'ops/dna/stack.json',
      'ops/dna/php-app.json'
    ]

    # Forward port 80 so we can see our work
    layer.vm.network "forwarded_port", guest: 80, host: 8080
    layer.vm.network "private_network", ip: "10.10.10.10"
  end

  # Create the db-master layer
  config.vm.define "db" do |layer|
    layer.vm.provision "opsworks", type: "shell", args: [
      'ops/dna/stack.json',
      'ops/dna/db-master.json'
    ]

    layer.vm.network "private_network", ip: "10.10.10.20"
  end
end
stack.json
{
    "opsworks": {
        "layers": {
            "php-app": {
                "instances": {
                    "php-app1": {"private-ip": "10.10.10.10"}
                }
            },
            "db-master": {
                "instances": {
                    "db-master1": {"private-ip": "10.10.10.20"}
                }
            }
        }
    },
    "deploy": {
        "simple-php": {
            "application_type": "php",
            "document_root": "web",
            "scm": {
                "scm_type": "git",
                "repository": "dev/simple-php"
            },
            "memcached": {},
            "database": {
                "host": "10.10.10.20",
                "database": "simple-php",
                "username": "root",
                "password": "correcthorsebatterystaple",
                "reconnect": true
            }
        }
    },
    "mysql": {
        "server_root_password": "correcthorsebatterystaple",
        "tunable": {"innodb_buffer_pool_size": "256M"}
    },
    "opsworks_custom_cookbooks": {
        "enabled": true,
        "scm": {
            "repository": "ops/cookbooks"
        }
    }
}
For those not familiar with vagrant you just do a vagrant up to start the instance(s). Then you can modify your cookbook locally and any changes can be applied by re-running chef against the existing instance(s) with vagrant provision. You can do a vagrant destroy and vagrant up to start from scratch.
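In practice the loop looks like this (just the commands mentioned above, collected in one place):
vagrant up          # build and provision the mocked instance(s)
vagrant provision   # re-run OpsWorks Chef after editing your cookbooks
vagrant destroy -f && vagrant up   # start over from a clean box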
You'll have to do it manually. You can add arbitrary attributes to your .kitchen.yml; just go onto an OpsWorks machine, log the values you need, and either use them directly or adapt them into workable test data.
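As a rough sketch (the suite name, run_list entry, and the attribute keys under opsworks and deploy are placeholders for whatever your recipes actually read), the mocked data can go into .kitchen.yml like this:
provisioner:
  name: chef_solo

suites:
  - name: default
    run_list:
      - recipe[my_wrapper::default]   # hypothetical wrapper cookbook
    attributes:
      opsworks:
        stack:
          name: "test-stack"          # placeholder value logged from a real instance
      deploy:
        my-app:
          application_type: "php"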