When I want to deploy my app to my environment, I have to create a single file in a specific format, containing the whole app (most of it Base64 encoded), and import that file into a proprietary application.
I've created a grunt task that can easily generate that file from a folder. So I'm looking for a way to just type something into the console that then executes ember build and my script.
The simplest way to do this is just to create a brand new Gruntfile.js next to the existing Brocfile.js, plus a batch file that runs ember build first and then grunt.
A better way would be if I could call ember build from my Gruntfile. Is there a way to do this?
Or, even better, is there a way to inject a grunt task into the ember build? That would be awesome!
To be clear, broccoli is not the right tool for this! It's not a build step but a deployment step, so I want to use the task runner, not the build tool.
Thanks!
You could potentially use grunt-exec to execute ember build, as part of a chain of grunt build tasks.
It allows you to execute arbitrary shell commands.
Something like the following might work:
grunt.initConfig({
  exec: {
    ember_build: {
      command: 'ember build'
    }
  }
});
and then execute with grunt exec:ember_build or as part of a larger task. (Note that I haven't tried this, but it should work!)
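If you go the "larger task" route, you could register an alias task that runs the Ember build and then your packaging task in sequence. A sketch (package_app is a hypothetical stand-in for whatever your existing grunt task is called):

  // hypothetical alias task: Ember build first, then your own packaging step
  grunt.registerTask('deploy', ['exec:ember_build', 'package_app']);

Running grunt deploy would then do both in one go.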
This might be slight overkill, though; you could just chain your console commands:
ember build && grunt
Related
I am new to Jenkins, especially to using Python scripts in Jenkins. The problem I am facing is as follows:
I am trying to run a Python script from a file in the post-build step of a Jenkins job. To my understanding I have added all the plugins required for that purpose, i.e. the Post-BuildScript plugin, the Python Jenkins plugin, etc.
Now when I build, the console output shows that an invalid script command caused the failure. I have attached the results below. Can anybody help me with that, please?
In the post-build step I am providing the full (absolute) path to the Python script file, i.e.:
[screenshot: Execute Python Script path]
[screenshot: results]
It may be useful to mention that I have also tried using just the path, without python preceding it, and with forward as well as backward slashes in the path, all without any success.
I have managed to resolve that issue. There are two parts to the solution:
The first part: if you want to run a simple Python script in a post-build step, add a post-build step of type Execute python Script (that will require installing the post-build plugin). In the window created after adding the post-build step you can simply put any Python command to run.
The second part: when you would like to run a list of commands from a Python script file in that same post-build step window, you have to make sure to put all the required Python files you want to execute into the Jenkins workspace, under the directory of the project you are running the job for.
Moreover, for Python 2.7, in order to execute that Python script file you simply need to write:
execfile('file.py')
One more thing to remember: add the python.exe path to the environment variables.
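Note that execfile only exists in Python 2. If your Jenkins node runs Python 3, the rough equivalent (an untested sketch, with the same assumption that the file sits in the workspace) is:

  exec(open('file.py').read())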
GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
Cloud Build integration output: [screenshot]
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # kick off one build per subdirectory that contains a cloudbuild.yaml,
    # in parallel, then wait for all of them to finish
    for d in */; do
      config="${d}cloudbuild.yaml"
      if [[ ! -f "${config}" ]]; then
        continue
      fi
      echo "Building $d ..."
      (
        gcloud builds submit "$d" --config="${config}"
      ) &
    done
    wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to detect not only the changes themselves, but also any services that depend on the changed code. Here is what we are doing:
Requiring every service to have a Makefile with a build command.
Putting a cloudbuild.yaml at the root of the mono-repo.
We then run a custom build step with this little tool (old, but it still seems to work): https://github.com/jharlap/affected, which lists all packages that have changed plus all packages that depend on those packages, and so on.
The shell script then runs make build on any service that is affected by the change, roughly as sketched below.
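A rough sketch of that shell script, assuming affected prints one changed package path per line (I haven't verified the exact invocation; check the tool's README):

  # hypothetical: 'affected' lists each changed/affected package path
  affected | while read -r pkg; do
    make -C "$pkg" build
  done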
So far it is working well, but I totally understand if this doesn't fit your workflow.
Another option many people use is Bazel. It's not the simplest tool, but it's especially great if you have many different languages or build processes across your mono-repo.
You can create a build trigger for your repository. When setting up a trigger with a cloudbuild.yaml as the build configuration, you provide the path to that cloudbuild.yaml within the repository, so each trigger can point at a different service's config.
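If you prefer scripting this, triggers can also be created from the command line, one trigger per service. Something along these lines should work (flag names per the gcloud docs; double-check them against gcloud beta builds triggers create github --help):

  gcloud beta builds triggers create github \
    --repo-name=YOUR_REPO --repo-owner=YOUR_GITHUB_USER \
    --branch-pattern="^master$" \
    --build-config=services/api/cloudbuild.yaml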
I'm trying to figure out how to get code coverage working with @angular/cli, but so far I'm not having much luck.
I started a new project using the Angular CLI. Basically all I did was ng new test-coverage, and once everything was installed in my new project folder, I ran ng test --code-coverage. The tests ran successfully, but nothing resembling code coverage was displayed in the browser.
Am I missing some dependencies or something else? Any help will be appreciated.
EDIT:
R. Richards and Rachid Oussanaa were right: the file does get generated, and I can access it by opening the index.html.
Now I'm wondering, is there a way I could integrate that into a node command so that the file opens right after the tests are run?
Here's what you can do:
Install opn-cli, a CLI for the popular opn package, a cross-platform tool used to open files in their default apps:
npm install -D opn-cli (the -D installs it as a dev dependency)
In package.json, add a script under scripts as follows:
"scripts": {
...
"test-coverage": "ng test --code-coverage --single-run && opn ./coverage/index.html"
}
Now run npm run test-coverage.
This will run the script we defined. Here is an explanation of that script:
ng test --code-coverage --single-run runs the tests, with coverage, only ONCE, hence --single-run.
&& executes the second command only if the first one succeeds.
opn ./coverage/index.html opens the coverage report regardless of platform.
I have an open source project that I'd like to test using Travis CI. Sadly it is rather flaky, and I'd like to know the reason. The tests write very verbose log files, so upon failure I'd like to export these to a GitHub Gist. There are command-line tools that let me do that, gist-paste for instance, but I don't know how to run them only upon failure and without overriding the return code of the unit tests, i.e., I'd still like Travis CI to notice the failure.
Great idea. Travis CI has a step in the build lifecycle called after_failure which will be fired after your testing script step has run and failed, as outlined on the "customizing the build" documentation page here.
In your .travis.yml file, you would add an after_failure step with a call to your gist-paste command, after setting the relevant GitHub tokens as encrypted environment variables, I assume. You can then access plenty of information about the build from the many environment variables Travis sets during the build.
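A minimal sketch of what that could look like in .travis.yml (the log file name and the exact gist-paste arguments are assumptions; adjust them to your project):

  after_failure:
    # upload the verbose test log as a Gist, only when the build failed
    - gist-paste test-output.log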
Coming from a JS / Node development background, I like to use Grunt for a lot of my automation. For a recent project I picked up some baby Django, to get a feel for how it operated, but still wanted to integrate Grunt for some of my workflow.
I am currently starting my Django server via Grunt, using the spawn-shell module. This works just fine, but I am also using a virtualenv setup, and would as well like to start that up via Grunt.
The command I am using to start the virtual environment is:
source ./venv/bin/activate
Which works just fine from the terminal command line as is. However, executing this command from either grunt shell or grunt exec does nothing. I get no errors from Grunt (it says running, then done without errors), but nothing gets started.
The grunt exec command is as follows:
exec: {
  start: {
    cmd: function() {
      return "source ./venv/bin/activate";
    }
  }
}
And the shell command is:
shell: {
  start: {
    command: 'source ./venv/bin/activate',
    options: {
      stdout: true
    }
  }
}
Any ideas on how to get this working? Or is it not possible, and I should just resort to entering the command manually at start?
When trying to get other tooling to launch Django while using a virtualenv, the usual approach is to perform both of the following in one command (each grunt exec/shell task runs in its own subshell, so anything activate changes is gone as soon as that subshell exits, which is why your tasks appear to do nothing):
activate the virtualenv
run the command you want.
So this ends up being:
. ${VIRTUALENVHOME}/bin/activate && ${PROJECTROOT}/manage.py runserver 0:8000
This is pretty much how it's done with Fabric, Ansible, and any other automation tool.
Edit: of course you'd be supplying the values for the variables VIRTUALENVHOME and PROJECTROOT yourself.
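Translated back into the grunt shell config from the question, that could look something like this (a sketch; it assumes manage.py sits in the project root, next to the venv/ folder):

  shell: {
    start: {
      // activate and run the server in the same shell invocation, so the
      // virtualenv is still active when manage.py starts
      command: '. ./venv/bin/activate && python manage.py runserver 0:8000',
      options: {
        stdout: true
      }
    }
  }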