Currently our process works, but it takes too much time, because the frontend Ember app needs to be built for every single environment we have (5 environments), since we never know which environment will be available when we release.
We intend to add even more environments, because every developer should have their own working development environment (because of the backend).
The way we do it is that we create a frontend build and a backend build, which produce artifacts.
Right now the frontend build takes around 2 minutes per environment:
ember build --env=test
ember build --env=acceptance
ember build --env=development
... and more
When the artifacts are created, we create the release, picking the correct ones depending on which environment we release to (this is done via a release pipeline).
My question is: can we somehow make the frontend Ember build not depend on the environment?
I would like to note that we are using Azure Service Fabric.
I don't think there is any way around multiple Ember builds, because each one will be different (i.e. production vs. development).
You can batch together each build inside one CI build/build task and produce artifact(s) to be used in your release pipeline.
Run the following command once for each environment you have (assuming you are using Ember-CLI) sequentially in one build task.
ember build --environment={{YOUR-ENV-HERE}} --output-path="dist/{{YOUR-ENV-HERE}}/"
You can then either upload the entire dist/ folder as an artifact and scope each environment in your release pipeline to the corresponding artifact subdirectory, or you can upload each folder inside /dist as an individual artifact and scope each environment in your release pipeline to its corresponding artifact.
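For example, a minimal sketch of such a build task could be a small script like the following (the environment names are just the ones mentioned in the question; adjust them to your actual list):

#!/usr/bin/env sh
# Build the Ember app once per environment, each into its own subfolder of dist/
for ENV in development test acceptance; do
  ember build --environment="$ENV" --output-path="dist/$ENV/"
done

Each subfolder of dist/ then maps to one environment in the release pipeline, as described above.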
Only the configuration changes, basically the API endpoints.
I have a Spring Boot project [App-Server] which I want to test.
I have created a Mock Server Docker image for it, which is also hosted on AWS/Docker Hub.
I have also used REST Assured for API testing; a Docker image for this is available on AWS/Docker Hub as well.
Now, before creating the Docker image for App-Server, I want to perform integration testing: I want Dockerfile.test for App-Server to build a Docker image, then on Jenkins I want the App-Server Docker image to start first, then the Mock-Server Docker image, and after that REST Assured to run the tests, which can be done via mvn test. Once the tests are successful, I want to create the final Docker image for App-Server.
Can this be done via Jenkins or AWS?
tl;dr: You have to create Docker images, deploy them to a test system, and run e.g. integration tests before creating the final release version.
Detailed answer: For your use case, I suggest taking a closer look at a Git branching model (e.g. gitflow) and at CI/CD, including containerization of the application.
Consider the following scenario. Once you fix e.g. a bug on the release branch and push it to Git, your Release Jenkins job pulls it and builds Docker images with a release candidate version, e.g. v1.0.0-rc1. You then promote/deploy the built release candidate version to e.g. a release reference system together with the mocking systems (you can use AWS for this), as illustrated here, i.e. the inner loop. You only create the final release version, e.g. 1.0.0, once the tests complete successfully, and then deploy it to e.g. the production system, i.e. the outer loop.
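As a very rough sketch of that inner loop on Jenkins, the shell steps could look like the following; the image names, tags and the network name are made up for illustration, so adjust them to your actual Dockerfiles and registries:

# Build a test image of App-Server from Dockerfile.test
docker build -f Dockerfile.test -t app-server:test .
# Start the mock server and the App-Server test image on a shared network
docker network create it-net
docker run -d --name mock-server --network it-net mock-server:latest
docker run -d --name app-server --network it-net app-server:test
# Run the REST Assured integration tests against the running containers
mvn test
# Only if the tests pass, build and tag the release candidate image of App-Server
docker build -t app-server:1.0.0-rc1 .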
I am deploying a React app on AWS Elastic Beanstalk. I bundle the app using webpack. However, I'm slightly confused about what the best practices are for the production build process. Should I build the app locally (with NODE_ENV=production) using webpack, and then just upload the resulting bundle.js file, along with all node_modules, to the Elastic Beanstalk instance? Or should I upload all the source files and run webpack on the actual AWS server during deployment?
You should never build for production locally (unless you're the only developer).
Ideally, you have a build process that gets triggered manually or automatically from a git commit which then builds your project for production for you.
By using a centralized build process, you can then be sure that all your builds are built the same way (e.g. same node version, same npm or yarn version).
Neither approach is really good, to be honest. Building locally is not the best way to build anything you want to run in production: you might have packages installed locally that affect what you're building, and the same applies to the OS you're doing it on.
And, again, the same applies to building during deployment. As the name 'deployment' suggests, it is about deploying: just placing your application on the server so it can serve as it is supposed to.
That's where CI/CD comes in. Those kinds of solutions guarantee that each build is done with the same steps and on the same solution stack. You want no differences between builds, because that lets you assume that any bug or deviation from the design is caused by the code, not by the environment it was built in.
Assuming that you're the only developer here (since you're asking about this), CI/CD might be definite overkill, so just create a shell script with the build steps and use Docker as the build environment, so it stays the same between builds. That's the closest to the CI/CD option you can get without the hassle.
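A minimal sketch of that shell-script-plus-Docker idea might look like this; the Node image tag, the mount path and the webpack invocation are assumptions about your project, not Elastic Beanstalk requirements:

#!/usr/bin/env sh
# Run the production build inside a pinned Node image so every build
# uses the same Node and npm versions, independent of the local machine.
docker run --rm -v "$PWD":/app -w /app node:18 \
  sh -c "npm ci && NODE_ENV=production npx webpack --mode production"
# The resulting bundle (e.g. dist/bundle.js) is what gets packaged and
# uploaded to Elastic Beanstalk, for example with "eb deploy".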
We're using the hosted build agent on VSTS to build and release our ASP.NET Core code to Azure App Service.
My question is: can we run WebPack to handle front-end tasks on this hosted build on VSTS or do we have to do it manually before checking the code into our repository?
Update:
I'm utilizing the new ASP.NET Core Build (Preview) template that's available on VSTS, with its out-of-the-box steps.
For VSTS we're working on an extension; it's currently in a beta phase, and you can ask for it to be shared with you.
Check the VSTS marketplace.
Check this GitHub repo.
Webpack is definitely not a first-class citizen for VS2015 and VSTS. Streamlining webpack for CI/CD has been a real headache in my case, especially as webpack was introduced hastily to solve dreadful performance issues with a large monolithic SPA (ASP.NET 4.6, Kendo, 15,000 files, 2,000 folders). To cut it short, after trying many scenarios to make sure that freshly rebuilt bundles would end up in IIS and the Azure web app, I settled on a 2-pass build. The sequence of VSTS tasks is as follows: npm install global, npm install local, npm webpack install local, npm webpack install global, build pass 1, webpack, build pass 2, etc. This works with hosted and private agents, provided you supply the proper path for webpack, as webpack is installed in a different location on hosted and private agents (I did not find a way to choose the webpack install location for consistency). I scorch everything before starting the build. You also need to do two things in the VS2015 solution: (1) unload the "built" folder, and (2) add Content Include="Built\**" to the project file. The "built" folder contains the bundles and should appear greyed out; otherwise there are more bad surprises and instabilities to deal with...
The Build-Pass #2 task in the VSTS build collects the fresh bundles generated after Build-Pass #1 and automatically includes them in the package to be published.
Without a second build pass, collecting the bundles and merging them into the zip package is a nightmare, especially when you have 15,000 files to unzip and then re-zip (300 ms per file!). I did not find a file-merging capability that I could readily use in VSTS.
I have my ear to the ground, listening for someone coming up with a more efficient CI/CD scheme for webpack. In the meantime, my 2-pass-build workaround is working flawlessly, but it is indeed slow.
I anticipate that the advances in ASP.NET Core, Angular 2 and webpack will address this more elegantly.
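For illustration only, the commands behind that task sequence could look roughly like the following; the solution name, the webpack config file and the msbuild switches are assumptions based on the description above, not the exact setup:

npm install -g webpack                                   # global webpack (its path differs between hosted and private agents)
npm install                                              # local install of the project dependencies, including the local webpack
msbuild MySolution.sln /t:Build                          # build pass 1
webpack --config webpack.config.js                       # regenerate the bundles into the Built folder
msbuild MySolution.sln /t:Build /p:DeployOnBuild=true    # build pass 2 picks up the fresh bundles and packages them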
Hi, I am a TeamCity developer.
I now have a task to move a TeamCity instance to another server.
So far I have moved the instance to the new server, but I am having a problem with running the builds.
As you can see in the snapshot of my builds on the build agent below, the builds are listed under incompatible agents.
Please, can anyone suggest a way to get the builds running again?
Your JDK does not seem to be recognized.
You should ensure it has been properly installed on every targeted agent.
To view your environment settings, you can click on your agent, then the Agent Parameters and Environment Variables tab.
Inside the C:\BuildAgent\conf\buildAgent.properties file you can declare new variables, like:
env.JDK_18=C:\Program Files (x86)\Java\jdk1.8.xxx\
Once that has been done, restart your TeamCity Agent service.
I have configured TeamCity with Git to get my ASP.NET MVC project.
My solution contains the web app and the corresponding unit tests:
MY_SOLUTION.sln:
- WebAppProject
- SomeCoreLibrary
- SomeCoreLibraryTests
- OtherProjects...
The steps that I have configured in TeamCity are the following:
Get external packages using NuGet
Build the solution and deploy it
Run Unit Tests
Run Automated Tests (using Selenium)
I want to run the unit tests after building but before deployment, and stop the deployment if the unit tests fail. Currently the deployment is done after the build, using the following command line parameters:
/p:VisualStudioVersion=11.0
/p:DeployOnBuild=true (I want this to be applied only after the SomeCoreLibraryTests.dll unit tests have passed)
/p:PublishProfile=MyWebDeploy
/P:AllowUntrustedCertificate=True
/P:UserName=username_here
/P:Password=password_here
Thanks,
Ionut
What I've done in similar cases is to use RoboCopy to just mirror the new website into the deployment path. Doesn't that work for you?
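For instance, a minimal sketch of that idea; both paths below are placeholders, so point them at your build output and the site's deployment folder:

REM Mirror the freshly built site into the deployment path
robocopy "C:\BuildAgent\work\MyWebApp\PackageTmp" "\\webserver\wwwroot\MyWebApp" /MIR

The /MIR switch mirrors the directory tree, so files removed from the build output are also removed from the deployment path.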
P.S.: if you do get this working, I'd suggest making a performance improvement in TeamCity which would allow you to run the unit tests in parallel with the automated tests:
I assume you are employing a single build configuration for all those steps. If that is the case, what I would recommend instead is using Dependent Build configurations to separate the different concerns. You can see an example here in an open source project of mine:
http://teamcity.codebetter.com/viewLog.html?buildId=112432&buildTypeId=bt1075&tab=dependencies
Log in as Guest and expand the Testeroids :: Publish to NuGet tree node to visualize the build flow.
To achieve this, you basically pass around the result of your build step in the artifacts (e.g. you pass the resulting binaries from Compile into Unit Test). You gain several things by using dependent builds: several independent build steps can run in parallel on different agents, and if one of your build steps fails because of external factors (e.g. say Publish failed because the network went down), you can trigger the build again and it will only rebuild the failed steps.
I am not familiar with the tools that you use. However, I would, in general, use a few build configurations for a project:
(1) A build configuration, triggered on change, containing these steps: get the latest source code and packages, build/compile and unit test; then create an artifact for the deployment task.
(2) A build configuration to deploy to a development server, triggered by successful completion of (1) and using its artifact (via a dependency).
(3) A build configuration for long-running (e.g. integration/functional) testing that is scheduled to run less frequently.
An advantage of (2) is that you can, if necessary, re-deploy a build/artifact without having to rebuild the artifact first. Also, if you have multiple agents, (2) and (3) can run independently of each other.
Furthermore, you can also tag builds in (2) that have passed development checks and then use their artifact in another build configuration to deploy it to a test server, etc.