How do I run my custom recipes on AWS OpsWorks?

I've created a GitHub repo for my simple custom recipe:
laravel/
|- recipes/
|  |- deploy.rb
|- templates/
|  |- default/
|     |- database.php.erb
I've added the repo to Custom Chef Recipes as https://github.com/minkruben/Laravel-opsworks.git
I've added laravel::deploy to the deploy "cycle".
This is my deploy.rb:
node[:deploy].each do |app_name, deploy|
  if deploy[:application] == "platform"
    script "set_permissions" do
      interpreter "bash"
      user "root"
      cwd "#{deploy[:deploy_to]}/current/app"
      code <<-EOH
        chmod -R 777 storage
      EOH
    end

    template "#{deploy[:deploy_to]}/current/app/config/database.php" do
      source "database.php.erb"
      mode 0660
      group deploy[:group]
      if platform?("ubuntu")
        owner "www-data"
      elsif platform?("amazon")
        owner "apache"
      end
      variables(
        :host     => (deploy[:database][:host] rescue nil),
        :user     => (deploy[:database][:username] rescue nil),
        :password => (deploy[:database][:password] rescue nil),
        :db       => (deploy[:database][:database] rescue nil)
      )
      only_if do
        File.directory?("#{deploy[:deploy_to]}/current")
      end
    end
  end
end
When I log into the instance over SSH as the ubuntu user, the app/storage folder permissions aren't changed and app/config/database.php is not populated with the database details.
Am I missing some critical step somewhere? There are no errors in the log.
The recipe is clearly recognized and loaded, but it doesn't seem to be executed.

With OpsWorks, you have two options:
Use one of Amazon's built-in layers, in which case the deployment recipe is provided by Amazon and you can extend Amazon's logic with hooks: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-extend-hooks.html
Use a custom layer, in which case you are responsible for providing all recipes including deployment: http://docs.aws.amazon.com/opsworks/latest/userguide/create-custom-deploy.html
The logic you have here looks more like a hook than a deployment recipe. Why? Because you are simply modifying an already-deployed app rather than specifying the deployment logic itself. This suggests you are using one of Amazon's built-in layers, with Amazon providing the deployment recipe for you.
If that assumption is correct, then you are on path #1, and re-implementing your logic as a hook should do the trick.

Related

Can I get an AWS Lambda to run an RSpec test?

I got it working locally, but in AWS I am getting an error.
Locally I use
run_lambda_function.rb
require_relative 'lambda_function'
lambda_handler(event: {}, context: Object.new)
Which calls
lambda_function.rb
require 'aws-sdk-lambda'
require 'json'
require 'rspec'
require 'capybara'
require 'capybara/rspec'
require 'webdrivers'
def lambda_handler(event:, context:)
  @@short_sleep = 1 # just for viewing and debugging purposes :)
  Capybara.app_host = 'https://google.com'
  RSpec::Core::Runner.run(['spec/google_spec.rb']) #, $stderr, $stdout)
end
which uses spec:
spec/google_spec.rb
describe 'Visit Websites', type: :feature do
  it 'can visit google' do
    visit '/'
    expect(page).to have_css('div')
    sleep @@short_sleep
  end

  it 'can visit google/forms' do
    visit '/forms'
    expect(page).to have_css('div')
    sleep @@short_sleep
  end
end
This runs locally, but when I bundle the code to vendor/, zip it all up, upload it to Lambda (via an S3 bucket, since with the dependencies the package exceeds the direct-upload size limit*) and try to run it in the AWS management console, I get an error in webdrivers.
I might be able to avoid this with Serverless or other approaches, but while I am learning I am trying to stay as simple and low-level as possible, without extra dependencies and aids. Within reason, of course. No hoops.
*Dependencies
Gemfile
This is for bundling locally while testing; not relevant (I think) to the uploaded code, as I bundled that to /vendor and zipped it all (hence the large size and the need to load via an S3 bucket):
source 'http://rubygems.org'
gem 'rspec'
gem 'webdrivers'
gem 'capybara'
gem 'aws-sdk'
It seems you are trying to run GUI-based tests in the Lambda environment. Lambda has no display device, so you should run your test cases in headless mode.
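As a configuration sketch (not tested against the asker's setup; the exact flags and binary locations depend on how Chrome is packaged into the Lambda deployment), registering a headless Chrome driver with Capybara typically looks like this:

```ruby
require 'capybara'
require 'selenium-webdriver'

# Register a headless Chrome driver. --no-sandbox and --disable-dev-shm-usage
# are commonly needed in constrained container-like environments such as Lambda.
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--no-sandbox')
  options.add_argument('--disable-gpu')
  options.add_argument('--disable-dev-shm-usage')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.default_driver = :headless_chrome
Capybara.javascript_driver = :headless_chrome
```

Note that webdrivers tries to download a matching chromedriver at run time, which also fails on Lambda's read-only filesystem; bundling a fixed chromedriver and pointing Selenium at it is the usual workaround.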

Set OpsWorks deployment recipes personalized for multiple apps

I have the standard PHP layer in OpsWorks Stack.
There are two application on this layer:
app1, on the domain app1.mydomain.com
app2, on the domain app2.mydomain.com
The applications run on the same servers.
I have a git repo with my deployment recipes. Everything works fine.
But now I need to personalize the deployment recipes for each app.
For example:
I need the folder 'folder_1' of app 'app1' to be writable (777)
I need the folder 'folder_1' of app 'app2' to be readable (644)
Right now I have a single recipe that runs for all deployed apps. How can I personalize my deployment recipe to run in different ways for different apps?
Thank you in advance
Edit: here is what I'd like to do:
node[:deploy].each do |app_name, deploy|
  if app_name == "app1" # app_name is the application's short name, which answers "how can I grab the application variable?"
    script "change_permissions_#{app_name}" do
      interpreter "bash"
      user "root"
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 777 uploads
        mv .htaccess_production .htaccess
      EOH
    end
  elsif app_name == "app2"
    script "change_permissions_#{app_name}" do
      interpreter "bash"
      user "root"
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 755 uploads
        rm .htaccess_production
      EOH
    end
  end
end
If you have only two apps' settings to apply, make one the default setup and use either a Chef tag or a node attribute in a switch case. I have also used environment variables to differentiate the code path, but that may not be necessary if the change is local to an instance.
If the changes involved are significant, consider placing them in separate recipes and running the appropriate one from the switch-case block. That is easier to maintain later on. Hope this helps.
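As a plain-Ruby sketch of that switch-case idea (the app names and settings keys here are hypothetical; in a real recipe each branch would declare the script resources or include a separate recipe):

```ruby
# Hypothetical per-app settings table, as one might use inside the
# node[:deploy].each loop, keyed on the application's short name.
def settings_for(app_name)
  case app_name
  when 'app1'
    { uploads_mode: '777', htaccess_action: :rename }
  when 'app2'
    { uploads_mode: '755', htaccess_action: :remove }
  else
    { uploads_mode: '755', htaccess_action: :keep } # default setup
  end
end
```

Keeping the differences in a small table like this, rather than duplicating whole resource blocks, also makes it obvious what actually varies between the apps.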

AWS CodePipeline advanced tutorial with Jenkins

I'm running through the AWS CodePipeline tutorial, and there is a step saying that I have to create a Jenkins job running a bash script which connects to the EC2 instance (not the one where Jenkins is running, but the one where the code was deployed earlier).
It is said that I have to connect to the EC2 instance by running this command in bash script:
TEST_IP_ADDRESS=192.168.0.4 rake test
But my gut feeling says this step is completely wrong.
There is no variable with this name, and there is no option to connect to an external instance just like that.
I've completed all the other steps successfully, but this one seems obviously wrong.
The bash script will run on your Jenkins instance, and it will make an HTTP request to the instance you configured in TEST_IP_ADDRESS.
When you add the "build step", and choose "Execute shell", you'll enter this:
TEST_IP_ADDRESS=192.168.0.4 rake test
You are defining the TEST_IP_ADDRESS variable, so it's up to you to give it an appropriate value.
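The VAR=value command form sets the variable only in the environment of that one command, which is how the test script can read it from ENV. A quick way to convince yourself, using Ruby's ability to pass an environment hash to a child process:

```ruby
require 'rbconfig'

# Run a child Ruby process with TEST_IP_ADDRESS set only for that process,
# mirroring the shell's `TEST_IP_ADDRESS=192.168.0.4 rake test` prefix form.
child = [RbConfig.ruby, '-e', 'print ENV.fetch("TEST_IP_ADDRESS", "unset")']
in_child  = IO.popen({ 'TEST_IP_ADDRESS' => '192.168.0.4' }, child, &:read)
in_parent = ENV.fetch('TEST_IP_ADDRESS', 'unset')
# in_child sees "192.168.0.4"; the parent's environment is untouched.
```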
First I had the same confusion, then I saw the source code and it is pretty self-explanatory:
#!/usr/bin/env ruby
require 'net/http'
require 'minitest/autorun'
require 'socket'

class JenkinsSampleTest < MiniTest::Unit::TestCase
  def setup
    uri_params = {
      :host => ENV['TEST_IP_ADDRESS'] || 'localhost',
      :port => (ENV['TEST_PORT'] || '80').to_i,
      :path => '/index.html'
    }
    @webpage = Net::HTTP.get(URI::HTTP.build(uri_params))
  end

  def test_congratulations
    assert(@webpage =~ /Congratulations/)
  end
end

OpsWorks Django Deployment

I am trying to deploy a Django application using AWS OpsWorks. I'm brand spanking new to any sort of DevOps work, so I'm having considerable difficulties.
I am trying to use this cookbook to automate my deployments. I need Python3.4, so I modified a few things in the cookbook. Right now during the deploy hook, I am getting an error from the following code:
# install requirements
requirements = Helpers.django_setting(deploy, 'requirements', node)
if requirements
  Chef::Log.info("Installing using requirements file: #{requirements}")
  pip_cmd = ::File.join(deploy["venv"], 'bin', 'pip')
  execute "#{pip_cmd} install --source=#{Dir.tmpdir} -r #{::File.join(deploy[:deploy_to], 'current', requirements)}" do
    cwd ::File.join(deploy[:deploy_to], 'current')
    user deploy[:user]
    group deploy[:group]
    environment 'HOME' => ::File.join(deploy[:deploy_to], 'shared')
  end
else
  Chef::Log.debug("No requirements file found")
end
The error reports:
STDERR: /opt/aws/opsworks/releases/20141216163306_33300020141216163306/vendor/bundle/ruby/2.0.0/gems/mixlib-shellout-1.4.0/lib/mixlib/shellout/unix.rb:147:in `chdir': No such file or directory - /srv/www/django/current (Errno::ENOENT)
I get that this code is trying to install requirements from my requirements.txt file, but what is up with the tmp directory and the current directory? Clearly no current directory is created when I do my deployment. What is the file structure generally like for code pulled into OpsWorks during a deploy? Moreover, how might I go about fixing this error?
I've been reading through documentation on Chef, OpsWorks, KitchenCI, Berksfile, and other technologies for days just feeling swamped by everything in the world of DevOps. I just want to get my application running!
EDIT
Custom json is:
{
  "deploy": {
    "django": {
      "django_settings_template": null,
      "django_settings_file": "settings.py",
      "django_collect_static": "true",
      "python_major_version": "3.4",
      "venv_options": "--python=$(which python3.4) --no-site-packages",
      "custom_type": "django"
    }
  }
}
If there is no current directory it's because it wasn't created during the deploy. Your script actually does reference that directory.
The block of code below is where the error occurs. If you refer to the documentation for the execute resource (https://docs.chef.io/resource_execute.html), you'll see that if you don't provide a command attribute, the name of the block is used as the command to execute. In your case you generate the path to your pip command via Ruby's File.join, so pip_cmd should be something like /usr/bin/pip, and the name of the block, which is the command, should be something like:
#Mind you I'm not sure if the /tmp dir is correct, but if not it might also be the chef tmp directory
/usr/bin/pip install --source=/tmp -r /srv/www/django/current/requirements
Now, the resource also has the cwd attribute, the "current working directory" in which the command is executed. So when this command runs, it is executed from /srv/www/django/current:
pip_cmd = ::File.join(deploy["venv"], 'bin', 'pip')
execute "#{pip_cmd} install --source=#{Dir.tmpdir} -r #{::File.join(deploy[:deploy_to], 'current', requirements)}" do
  cwd ::File.join(deploy[:deploy_to], 'current')
  user deploy[:user]
  group deploy[:group]
  environment 'HOME' => ::File.join(deploy[:deploy_to], 'shared')
end
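To make that concrete, here is what the interpolated resource name evaluates to under some assumed attribute values (the venv path is hypothetical; Dir.tmpdir is usually /tmp on Linux):

```ruby
require 'tmpdir'

# Assumed deploy attributes, for illustration only.
deploy = { 'venv' => '/srv/www/django/shared/venv', :deploy_to => '/srv/www/django' }
requirements = 'requirements.txt'

pip_cmd = File.join(deploy['venv'], 'bin', 'pip')
# This string becomes the execute resource's name and, absent a `command`
# attribute, also the command Chef runs from cwd /srv/www/django/current.
command = "#{pip_cmd} install --source=#{Dir.tmpdir} -r #{File.join(deploy[:deploy_to], 'current', requirements)}"
```

The chdir error in the question fires before this command ever runs: Chef cannot cd into the cwd because /srv/www/django/current was never created by the deploy.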
Without knowing a bit more about your deployment I can't really tell you how to fix it. Can you post your actual cookbook code so we can see how you are using this cookbook to deploy your application?
{
  "deploy": {
    "django": {
      "repository": "Your github URL",
      "revision": "your revision number"
    }
  }
}

Django Deploy From Github to Server Like Capistrano with Simple Fabric Recipe

I wanted to use Capistrano to deploy my Django app on my Webfaction server, but due to my purist tendencies I wanted to do it in Fabric, the way Capistrano does it. The thing I liked most about Capistrano is that it automatically retrieves a repo's contents and pushes them to a server.
The Fabric recipes I have seen so far require me to do things "the git way", manually entering git commands to work with the repo, etc.
Is there a way to deploy a Django app in Fabric (or any other Python package) "the Capistrano way"?
Side note: in case I really do have to work with Capistrano, is there a way to bypass the assets precompile task and the rake db:migrate task?
I've successfully used the scripts from here to deploy to Webfaction.
If you want to bypass the assets compilation, just don't write this line in your recipe:
load 'deploy/assets'
If you don't want to run migrations, simply never run the migration command:
cap deploy:migrate
If you want to remove some other behaviors (symlink, restart, uploading code to the servers), include the relevant parts of this:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart do ; end
  task :update_code do ; end # override this task to prevent Capistrano from uploading to the servers
  task :symlink do ; end     # don't create the current symlink to the last release
end
For anyone who stumbles across this, here is a very basic Capistrano recipe:
http://ygamretuta.me/2012/07/18/deploy-django-1-4-webfaction-capistrano/