Following along with the Ember tutorial, I got to installing ember-cli-mirage here. After I install the add-on, the server will not start: it gets to "Serving on http://localhost:4200/" and then just hangs. When I try to kill the server via Ctrl-C, it also hangs at "cleaning up...". If I remove ember-cli-mirage from package.json and node_modules, everything starts again.
I have dug around and gotten the livereload server and other pieces running, but this one gives me nothing to work with. Are there additional settings or pieces the tutorial is missing, or do I have a version/environment issue? Is there a way to get more information at startup (I know --verbose is not one)?
Vagrant: ubuntu/trusty-64 (14.04.5 LTS)
Node: v7.5.0
NPM: v3.10.8
ember-cli: 2.11.1
After more digging, this appears to have something to do with Vagrant. I ran through the same steps on my local Windows machine and it loads fine with Node v6.10.0. I then tried that version of Node on Vagrant and left it running while I went out. It ultimately took 19 minutes for the build to come up, and the full time was spent in Babel. It doesn't appear to be a memory issue on the VM; I'm not sure what the difference may be, and no errors were reported.
Ember server output from both environments:
Vagrant
vagrant@vagrant-ubuntu-trusty-64:/vagrant/sample$ ember serve
Livereload server on http://localhost:4300
Serving on http://localhost:4200/
Build successful - 1169101ms.
Slowest Nodes (totalTime => 5% ) | Total (avg)
----------------------------------------------+---------------------
Babel (19) | 1106280ms (58225 ms)
Windows
c:\projects\super-rentals>ember server
Livereload server on http://localhost:49153
Serving on http://localhost:4200/
Build successful - 24208ms.
Slowest Nodes (totalTime => 5% ) | Total (avg)
----------------------------------------------+---------------------
Babel (19) | 12332ms (649 ms)
SimpleReplace (2) | 4686ms (2343 ms)
Concat (8) | 3411ms (426 ms)
Funnel: Addon JS (11) | 1289ms (117 ms)
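One environment difference worth noting: the Vagrant prompt above shows the project running out of /vagrant, which on a VirtualBox box is typically a vboxsf shared folder with very slow file I/O, and Babel's per-file transpiling is I/O-heavy. A possible workaround, sketched here with illustrative paths and untested on this exact setup, is to keep node_modules on the VM's native filesystem:
# Move node_modules off the shared folder onto the VM's own disk;
# ~/sample-node_modules is an arbitrary location on the guest filesystem.
mkdir -p ~/sample-node_modules
rm -rf /vagrant/sample/node_modules
ln -s ~/sample-node_modules /vagrant/sample/node_modules
cd /vagrant/sample && npm install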
Related
Following the minimalist installation instructions from here, on macOS High Sierra 10.13.1 I executed:
bin/zeppelin-daemon.sh start
The daemon starts OK, but pointing any browser to http://localhost:8080 yields
HTTP ERROR: 503
Problem accessing /. Reason:
Service Unavailable
Powered by Jetty://
The same thing happens if I run as root, or if I run the browser as root, or if I install via homebrew (brew install apache-zeppelin).
Is this a permissions problem?
What is the solution?
Thanks!
The workaround was:
Install Java 8, following How to set or change the default Java (JDK) version on OS X?, i.e.
brew tap caskroom/versions
brew cask install java8
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
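To confirm the switch took effect before starting the daemon, a quick sanity check:
/usr/libexec/java_home -v 1.8   # prints the JDK 8 home that JAVA_HOME now points to
java -version                   # should report java version "1.8.0_..."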
Then:
sudo bash
zeppelin-daemon.sh start
/Applications/Safari.app/Contents/MacOS/Safari
Point browser to:
http://localhost:8080
Success! Conclusions:
Zeppelin 0.7.3 only supports Java <= 8.0
zeppelin-daemon.sh must be run as root, but the browser doesn't have to be
Install Java 1.8 as mentioned in the above post.
If you give the superuser password at install time, you do not have to run as root every time. See the log below for where the password prompt appears.
[ksurendranath@machine /usr/local/Cellar/apache-zeppelin/0.7.3/libexec/logs 10:54 AM ]$ brew cask install java8
==> Tapping caskroom/cask
Cloning into '/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask'...
remote: Counting objects: 4057, done.
remote: Compressing objects: 100% (4022/4022), done.
remote: Total 4057 (delta 37), reused 824 (delta 31), pack-reused 0
Receiving objects: 100% (4057/4057), 1.39 MiB | 11.49 MiB/s, done.
Resolving deltas: 100% (37/37), done.
Tapped 0 formulae (4,066 files, 4.4MB)
==> Creating Caskroom at /usr/local/Caskroom
==> We'll set permissions properly so we won't need sudo in the future
Password:
Get the process info for the port the Zeppelin server is using:
1) sudo netstat -anp | grep 8080
2) sudo kill <ProcessID>
3) /zeppelin-server/bin/zeppelin-daemon.sh restart
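Note that on macOS, netstat lacks the Linux-style -p flag for mapping ports to PIDs, so lsof may be the more reliable first step there (same port and daemon path as above):
sudo lsof -i :8080              # shows the process bound to port 8080, with its PID
sudo kill <ProcessID>
/zeppelin-server/bin/zeppelin-daemon.sh restart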
This might help you.
When running ember test --host 172.17.0.2 --test-port 4450, I'm getting the following error.
Error: Browser failed to connect within 30s. testem.js not loaded?
Since I'm using a Docker container, I'm assuming I need to update the host and port to the exposed Docker host and port.
This is my testem.js file:
/*jshint node:true*/
module.exports = {
  "framework": "qunit",
  "test_page": "tests/index.html?hidepassed",
  "phantomjs_debug_port": 4500,
  "disable_watching": true,
  "launch_in_ci": [
    "PhantomJS"
  ],
  "launch_in_dev": [
    "PhantomJS",
    "Chrome"
  ]
};
This is a general problem you'll see when testing an Ember application in continuous integration environments. Multiple users have posted their experiences with the possible bug in this GitHub issue. Two answers come to mind.
Per Testem's author, you can increase the browser connection timeout (see the sketch after this list).
Compare your ember application's .travis.yml with the canonical version in the ember-new-output repository here. The ember-cli core team and community members have invested a lot of time refining and debugging that .travis.yml to make it work well with ember applications.
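For the first option, here is a minimal sketch of the testem.js above with a longer connection timeout. browser_start_timeout is the Testem option I believe governs the 30-second default behind that error, and 90 is an arbitrary value:
/*jshint node:true*/
module.exports = {
  "framework": "qunit",
  "test_page": "tests/index.html?hidepassed",
  "phantomjs_debug_port": 4500,
  "disable_watching": true,
  // Give slow CI browsers more time to connect (seconds; default is 30).
  "browser_start_timeout": 90,
  "launch_in_ci": [
    "PhantomJS"
  ],
  "launch_in_dev": [
    "PhantomJS",
    "Chrome"
  ]
};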
I am attempting to deploy some changes to a loopback app running on a remote Ubuntu box on top of strong-pm.
The changes that I make locally are not being reflected in what gets deployed to the server. Here are the commands I execute:
$ slc build
$ slc deploy http://IPADDRESS deploy
to which I get a successful deploy message which looks like this:
peter@peters-MacBook-Pro ~/Desktop/projects/www/places-api master slc deploy http://IPADDRESS deploy
Counting objects: 5740, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5207/5207), done.
Writing objects: 100% (5740/5740), 7.14 MiB | 2.80 MiB/s, done.
Total 5740 (delta 1555), reused 150 (delta 75)
To http://IPADDRESS:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Checking the deployed files on the server here:
/var/lib/strong-pm/svc/1/work
I can see that the changes I made to the local app are not reflected in what has just been deployed to the server.
In order to check that the changes are reflected in the build, I checked out the deploy git repository, like so:
git checkout deploy
Inspecting the files here, I can see that the changes I made are present.
Does anyone know why the changes are not reflected in what is deployed to the server?
I know this is an old post, but for anyone getting this issue: I just encountered the same problem.
Finally, I used slc arc and tried to build from there.
Make sure that the "Fully qualified path to archive" has a correct value
It should be something like
../project-1.0.0.tgz
I loved the previous question at:
500 Internal Server Error - ActionView::Template::Error in Rails Production
I get the same error when browsing the git tree via the web (internal 500), but the answer there said I should run
bundle exec rake assets:precompile
and referred me to
http://guides.rubyonrails.org/asset_pipeline.html#in-production
I am running GitLab 7.6.1 0286222 on Ubuntu 14.04 LTS, fully up to date. It allows me to push and pull from local git machines fine, and to look around via the web service as well. I ran the revised assets:precompile as suggested below, but the problem continues for me.
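For reference, on a GitLab source install of that era the precompile task is normally run as the git user with the production environment set; the paths here assume the standard /home/git/gitlab layout:
cd /home/git/gitlab
sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production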
As for my specific error, in the production log I get:
git@git01:~/gitlab/log$ tail -n 20 production.log
Started GET "/chef/cheftest/tree/master/cookbooks" for 127.0.0.1 at 2014-12-24 16:03:25 -0500
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"chef/cheftest", "id"=>"master/cookbooks"}
Completed 500 Internal Server Error in 490ms
ActionView::Template::Error (undefined method `[]' for nil:NilClass):
1: - tree, commit = submodule_links(submodule_item)
2: %tr{ class: "tree-item" }
3: %td.tree-item-file-name
4: %i.fa.fa-archive
app/models/repository.rb:162:in `method_missing'
app/models/repository.rb:228:in `submodule_url_for'
app/helpers/submodule_helper.rb:6:in `submodule_links'
app/views/projects/tree/_submodule_item.html.haml:1:in `_app_views_projects_tree__submodule_item_html_haml___742655240099390426_69818877669240'
app/helpers/tree_helper.rb:19:in `render_tree'
app/views/projects/tree/_tree.html.haml:42:in `_app_views_projects_tree__tree_html_haml__47884322835133800_69818822684460'
app/views/projects/tree/show.html.haml:9:in `_app_views_projects_tree_show_html_haml__1575471590709486656_69818822138660'
app/controllers/projects/tree_controller.rb:13:in `show'
I would be happy to run any commands and edit any configuration files as needed, but please let me know where the files are and how to run the commands. Thanks for your help with this.
I am currently working on a UI project for my team. After building the project on Jenkins, we want to trigger the acceptance tests. On my local machine, I can do so by starting server.py with the command:
python server.py
After the server is up and running, I can run the acceptance test folder that I have written with the command:
pybot acceptance_tests
I am now trying to migrate my tests from my local machine to Jenkins. What I cannot figure out is how to run the server (server.py) on Jenkins. I am relatively new to Jenkins, so any details would be great!
Instead of starting the server via Jenkins, have the test start and stop the server. This will give you the same behavior regardless of how you run your tests (i.e. via Jenkins or manually from the command line).
Robot has a library named Process which you can use for starting and stopping processes. You can use the Start Process and Terminate Process keywords to start and stop the webserver via a suite setup and suite teardown. It would look something like this:
*** Settings ***
| Library | Process
| Suite Setup | Start the webserver
| Suite Teardown | Stop the webserver
*** Keywords ***
| Start the webserver
| | ${server process}= | Start process | python | server.py
| | Set suite variable | ${server process}
| Stop the webserver
| | Terminate Process | ${server process}
Of course, you'll want to add some bullet-proofing, such as making sure the server actually starts, and possibly catching errors if it doesn't exit cleanly. You may need to give an explicit path to server.py, but hopefully this gives the general idea.
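As a minimal sketch of that bullet-proofing, the Process library's Process Should Be Running keyword can catch a server that dies immediately; the one-second sleep is an arbitrary grace period, and a real suite might poll the server's port instead:
*** Keywords ***
| Start the webserver
| | # Launch server.py, then fail fast if it exited right away.
| | ${server process}= | Start process | python | server.py
| | Set suite variable | ${server process}
| | Sleep | 1 second
| | Process should be running | ${server process}
| Stop the webserver
| | # Terminate Process returns a result object we can inspect.
| | ${result}= | Terminate process | ${server process}
| | Log | server.py exited with rc ${result.rc}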