I got it working locally, but in AWS I am getting an error.
Locally I use
run_lambda_function.rb
require_relative 'lambda_function'
lambda_handler(event: {}, context: Object.new)
which calls
lambda_function.rb
require 'aws-sdk-lambda'
require 'json'
require 'rspec'
require 'capybara'
require 'capybara/rspec'
require 'webdrivers'
def lambda_handler(event:, context:)
  # short_sleep = 1 # just for viewing and debugging purposes :)
  Capybara.app_host = 'https://google.com'
  # returns RSpec's exit status (0 when all examples pass)
  RSpec::Core::Runner.run(['spec/google_spec.rb']) #, $stderr, $stdout)
end
which uses spec:
spec/google_spec.rb
describe 'Visit Websites', type: :feature do
  it 'can visit google' do
    visit '/'
    expect(page).to have_css('div')
    # sleep short_sleep # optional debugging pause; note a bare sleep (argument commented out) would block forever
  end
  it 'can visit google/forms' do
    visit '/forms'
    expect(page).to have_css('div')
    # sleep short_sleep
  end
end
This runs locally, but when I bundle the code into vendor/, zip it all up, upload it to Lambda (via an S3 bucket, since the dependencies push the package over the 50 MB direct-upload limit*), and try to run it in the AWS Management Console, I get an error in webdrivers.
I might be able to avoid this with Serverless or other approaches, but while I am learning I am trying to stay simple and low-level, without extra dependencies and aids. Within reason, of course. No hoops.
Dependencies
Gemfile
(used for bundling locally while testing; not relevant, I think, to the uploaded code, since I bundled that into vendor/ and zipped it all, hence the large size and the need to load via an S3 bucket)
source 'https://rubygems.org'
gem 'rspec'
gem 'webdrivers'
gem 'capybara'
gem 'aws-sdk'
It seems you are trying to run GUI-based testing in the Lambda environment. Lambda does not have access to display devices, so you should run your test cases in headless mode.
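A minimal sketch of what that could look like with Capybara driving headless Chrome; the driver name :headless_chrome and the specific flags are illustrative, and it assumes a Chrome/Chromium binary compatible with the Lambda runtime is available (for example via a Lambda layer):
require 'capybara'
require 'selenium-webdriver'
# Register a headless Chrome driver (the name is arbitrary)
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--no-sandbox')            # Chrome's own sandbox is not available in Lambda
  options.add_argument('--disable-dev-shm-usage') # /dev/shm is very small in Lambda
  options.add_argument('--disable-gpu')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
Capybara.default_driver = :headless_chrome
Capybara.javascript_driver = :headless_chrome
With this registered before the specs run, Capybara drives Chrome without needing a display.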
I have a Rails 4.2.4 (Ruby 2.2.2) application and I am serving static assets via CloudFront.
If CloudFront is serving something you don't want, there are two possibilities:
Invalidate the content in CloudFront, or
Change the name of the asset served.
However, when I change
Rails.application.config.assets.version = '1.0'
to
Rails.application.config.assets.version = '2.0'
(in config/initializers/assets.rb),
delete all the assets in public/assets, and
run RAILS_ENV=staging bundle exec rake assets:precompile,
the same file names are generated!
The only way I found to invalidate the digested file of application.scss was to add some dummy content in order to provoke a new MD5 checksum.
What am I doing wrong?
Shouldn't a new assets.version change the digested file names?
Best regards and thanks!
As per the comments in the Rails pull request I opened, this is a regression that needs to be fixed: https://github.com/rails/sprockets-rails/issues/240
Update: As sansarp mentions, one of the workarounds listed in that GitHub issue is to use an old version of sprockets:
gem 'sprockets', '< 3.0.0'
Another workaround is to use the asset path as a cache breaker instead:
# config/initializers/assets.rb
Rails.application.config.assets.prefix = "/assets/v1"
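Because the prefix is part of every generated asset URL, bumping it (say, to /assets/v2 on the next release) changes the paths CloudFront sees, so stale cached copies are bypassed without an explicit invalidation.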
Using a previous version of sprockets can get you the expected file names: gem 'sprockets', '< 3.0.0' (https://github.com/rails/sprockets)
If you use Capistrano for deployment, be sure to set assets_prefix in the deploy.rb file:
set :assets_prefix, "assets/v1"
I am looking for alternative methods of deploying a Play application to Elastic Beanstalk. It is a single-page app that relies on Ember.js. It would be nice to be able to edit the contents of the /public folder so I don't need to rebuild the Docker image every time something is fixed on the Ember side that doesn't affect the Play app itself.
I am currently using sbt's docker:stage command and zipping the generated docker folder along with this Dockerfile and Dockerrun.
Dockerfile
FROM java:latest
WORKDIR /opt/docker
ADD stage /
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=application-prod.conf"]
CMD []
Dockerrun
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{ "ContainerPort": "9000" }],
  "Volumes": []
}
Once I zip the folder I upload it using the Beanstalk console. But this means rebuilding the app every time a typo is fixed on the front end. It is annoying because all the updated front-end code has to wait until I get a chance to push it up so the boss can view it and give feedback. It would be nice if there were a way to make the /public folder (Play just serves /public/index.html) accessible so the front-end dev could edit it directly.
Ideally I would like some method that can be used for both development and production. I don't know the requirements Beanstalk imposes in order to properly spin up extra instances of the app when needed. Maybe something where, when the instance starts, it does a git pull on the back-end repo and a git pull on the front-end repo, then runs my custom build script for Ember to generate the /dist folder, moves it into Play's /public folder, and creates gzips of each file, and then starts the Play app. Then the front-end dev could ssh into the development instance and do git pull and ember build as needed for his edits.
It would also be nice for the development Play server to be run using run or ~run so I can just do git pull and have it rebuild the back end.
Or maybe I am approaching this in the completely wrong way. I have never done any of this before so I am sort of guessing my way through all of it.
Thanks for any suggestions and pointers in the correct direction.
Adam
Edit
Since we are really only using Play as a RESTful API, would it be better to run an nginx/Apache server on something like EC2 and use Beanstalk to run the Play app without it serving any content besides API calls? I would assume the EC2 nginx instance could be pretty tiny, since only the first access would pull files from the HTTP server; after that it is all API calls. Then we run the Play app from Beanstalk so it can handle load balancing for the API. This at least saves me from rebuilding the image for front-end edits. Would this be a more correct setup?
I cannot figure out why I am getting a 404 error at http://rachaelsalter.github.io
I am using Middleman and the middleman-deploy gem. Everything seemed to work fine with the deploy and I have an index.html file.
Any help would be very much appreciated.
You're pushing to a user repository (userName.github.io), so you need to deploy your generated code to the master branch. See the GitHub Pages documentation.
In your config.rb file, you must change the deploy.branch variable, which is gh-pages by default, to master:
activate :deploy do |deploy|
  deploy.method = :git
  deploy.branch = 'master'
  # ... other deploy setup
end
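Then rebuild and redeploy with bundle exec middleman build followed by bundle exec middleman deploy (or set deploy.build_before = true in the block above to build automatically); middleman-deploy pushes the generated site to whichever branch you configured.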
If you're versioning your sources, you will have to move them to another branch, such as sources.
I am having a difficult time setting up Rails 4.2 in production on a VM running Passenger and nginx, without using RVM or anything similar.
I got Incomplete response received from application, and looking in the nginx error log it said something about a missing secret_key_base and secret_key, although there is no reference to that last one anywhere in the config directory.
I ran export SECRET_KEY_BASE='...', and in rails c production, ENV["SECRET_KEY_BASE"] displays the key, but after restarting nginx I still get the same error.
Placing the key directly in secrets.yml solved that problem, but is there an actual way to do this correctly?
Solution:
The solution that worked for me was to place export SECRET_KEY_BASE="<string obtained from rake secret>" in .bashrc.
If you use rbenv, there is another solution below in the accepted answer.
If you are using rbenv, you can add the rbenv-vars plugin and create a .rbenv-vars file (don't check it into your repo) containing:
SECRET_KEY_BASE='...'
Another solution is to add SECRET_KEY_BASE manually to the secrets.yml file and also keep that file out of your repo.
A third solution I saw mentioned is adding
export SECRET_KEY_BASE='...'
to one of these files: .bashrc, .bash_profile, or .profile.
Your config/secrets.yml should have something like:
development:
  secret_key_base: f91fe2e2e4a9bf8f8b6aa1c296bb9ec10f2bc91c08965176a642ea0927400651ea993512f83d9823bcc046555e40b8c257f5f19fab8c59b5a02c9d230a369fe7
test:
  secret_key_base: c116ac7c8f69018d1f4e10f632cac7a22348f0bd8ed8f21ca45460574d2f501f248418bc888e31556e16ba3ab58c3a7cba027140097abe3f511dddf6625fa8cd
# Do not keep production secrets in the repository,
# instead read values from the environment.
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
To set SECRET_KEY_BASE, first you'll need to generate it with
rake secret
Then take that output and edit your /etc/environment (location depends on your distro; assuming Ubuntu here) to include it as such:
SECRET_KEY_BASE=...
Restart your server and you should be gravy
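To sanity-check that the app can actually see the key (a quick test, assuming a standard Rails 4.2 app), you can run:
RAILS_ENV=production bundle exec rails runner 'puts Rails.application.secrets.secret_key_base ? "secret_key_base is set" : "secret_key_base is MISSING"'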
I wanted to use Capistrano to deploy my Django app on my WebFaction server, but due to my purist tendencies I wanted to do it in Fabric, in the way that Capistrano does it. The thing I liked most about Capistrano is that it automatically retrieves a repo's contents and pushes them to a server.
The Fabric recipes I have seen so far required me to do things "the git way", manually entering git commands to work with the repo, etc.
Is there a way to deploy a Django app in Fabric (or any other Python package) "the Capistrano way"?
Side Note: In case I really have to work with Capistrano, is there a way to bypass the assets precompile task and the rake db:migrate task?
I've successfully used the scripts from here to deploy to WebFaction.
If you want to bypass the assets compilation, just don't include this line in your recipe:
load 'deploy/assets'
If you don't want to run migrations, just never type the migration command:
cap deploy:migrate
If you want to remove some other behaviors (symlink, restart, update code to server), override the chosen parts from this:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart do ; end
  task :update_code do ; end # override this task to prevent Capistrano from uploading code to the servers
  task :symlink do ; end # don't create the current symlink to the last release
end
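In Capistrano 2, defining a task with a name that already exists replaces the built-in definition, so an empty body effectively turns that step into a no-op.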
For anyone who stumbles across this, here is a Capistrano recipe that is about as basic as it gets:
http://ygamretuta.me/2012/07/18/deploy-django-1-4-webfaction-capistrano/