Does a Jenkins Master need Perforce installed to build?

We have a Jenkins master/slave configuration, and Perforce is installed on all masters and slaves. We just had an unrelated incident that forced us to renew our .p4tickets on all masters and slaves, and we came to find that Perforce had been removed from one master by someone on our team about a week ago without telling anyone.
Our jobs are set up to wipe the workspaces on the slaves clean every time a build occurs so that we can issue a p4 sync each time, and we build several times a day.
The problem is that the master that had Perforce missing has been completing builds successfully for a week now.
I have been operating under the assumption that, with the architecture we have, Perforce does a push from the master to the slave, since the jobs are kept on the master. Is this incorrect?

You don't need the Perforce client on the Jenkins master unless the master itself is set up to run builds that pull code from Perforce. If all of your builds run on slaves, the client only needs to be installed on the slaves.
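As an illustration, the per-build wipe-and-sync described in the question runs entirely on the slave, so that is the only place the client is needed. A minimal sketch, where the server address, user, and client workspace name are all placeholders:

# Runs on the slave at the start of each build; the p4 client binary must
# exist here, not on the master. Server, user, and workspace are placeholders.
export P4PORT=perforce.example.com:1666
export P4USER=builduser
export P4CLIENT=jenkins-slave01-ws
p4 sync -f //depot/project/...   # force-sync into the freshly wiped workspace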

If you are using the newer p4 plugin, you don't need to install a p4 client on either the master or the slaves: the p4 plugin uses the native p4java API to talk directly to the Perforce server.


Puppet agents aren't applying changes from PuppetMaster

We have a deployment in AWS with a single PuppetMaster box that services hundreds of other servers within the AWS ecosystem. Starting yesterday, we noticed that Puppet changes were not being applied on the agents. At first we thought it affected only newly provisioned boxes, but now we see that changes simply aren't applying anywhere, and we get no error message on any of the machines where the puppet agent runs.
# puppet agent --test --verbose
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for blarg-follower-0e5385bace7e84fe2
Info: Applying configuration version '1529498155'
Notice: Finished catalog run in 0.24 seconds
I have access to the PuppetMaster and have validated that the code there is up to date. I need help figuring out how to get better logging out of this and how to debug what is wrong between the agent and the Puppet master.
In this case the issue was that our Puppet master's /etc/puppet/puppet.conf file had been modified, and indeed the agents weren't getting the full catalog from the Puppet master. We found a backup copy of the file, restored it, and we were back in business.
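For the "better logging" part of the question, a few commands that help narrow down an agent/master mismatch like this one; the backup file name below is an assumption:

# Run the agent with full debug output to see exactly what it requests and receives
puppet agent --test --debug

# Confirm which master and environment the agent is actually pointed at
puppet config print server environment --section agent

# On the master, diff the live config against a known-good backup (name assumed)
diff /etc/puppet/puppet.conf /etc/puppet/puppet.conf.orig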

Hyperledger Fabric v0.6: how to add more peers to the network without Vagrant and Docker?

I have installed Fabric v0.6 on my Ubuntu 16.04 VM and it works fine: peer, membersrvc, the API, and chaincode all run. I am setting up a second VM on the same server in exactly the same way; call them VM1 and VM2. I don't use Vagrant or Docker.
My project is to connect 4 peers together, and I have received conflicting advice: some said that without Vagrant or the Docker images it will not work, while others said it does. Either way, my project does not use any virtualized environment; it is a real deployment.
I was told that I need to modify some lines in core.yaml and membersrvc.yaml so that the peers can discover each other and exchange acknowledgment messages, and some said I also have to set up port mapping across the different servers to make it work.
When I try to edit the two files, sometimes I don't understand them at all, and some of the tips come with no concrete operations or steps.
Thanks for helping us solve the problem.
Hyperledger Fabric has come a long way since the v0.6 release. If you are still working with v0.6, it is highly recommended that you upgrade to the most current release. There are a number of ongoing efforts to develop supported approaches to deploying Hyperledger Fabric to multiple nodes, and for v1.0 and beyond, our deployments are all Docker based and can support Kubernetes and Swarm.
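That said, for anyone who must stay on v0.6 without Vagrant or Docker, the core.yaml discovery settings the question refers to can also be supplied as environment variables when starting each peer. A rough sketch, assuming VM1 is reachable as vm1 and that the default peer port 7051 from a stock core.yaml is unchanged:

# On VM1: start the first validating peer (the discovery root node)
CORE_PEER_ID=vp0 CORE_PEER_ADDRESSAUTODETECT=true peer node start

# On VM2..VM4: point discovery at the root node so the peers find each other
CORE_PEER_ID=vp1 CORE_PEER_ADDRESSAUTODETECT=true \
CORE_PEER_DISCOVERY_ROOTNODE=vm1:7051 peer node start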

Deploy Docker from Travis to AWS (or any other SSH-able server)

My deployment process is missing one important piece: pushing the code up to the server.
I'm banging my head over whether to:
1 - Create/build the Docker image on Travis and then somehow push it to AWS
OR
2 - Try to SSH (from the Travis script) into my AWS instance and run a set of commands there, including the Docker image build and initialization.
I'm genuinely in doubt, and I see problems with both solutions proposed above. What would be the usual mechanism here?
I will answer myself here:
1 is not a good idea; it's definitively wrong. If you build the Docker image on the Travis side and then (try to) transfer it to AWS, you'd be hammering the network, and your deployment could take ages.
2 is more or less the way to go (there may be others). In my case I SSH'd from Travis into my server and executed a set of commands remotely, among them docker build and docker run.
It's worth mentioning that whatever commands you execute remotely on your server, their output is transferred nicely back into your Travis output.
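A minimal sketch of that remote step, assuming an SSH key has been provisioned to Travis and that the app lives in /opt/myapp on the server; the user, host, image name, and ports are all placeholders:

# Runs from the Travis build after tests pass; everything between the EOF
# markers executes on the remote server
ssh -o StrictHostKeyChecking=no deploy@my-aws-host.example.com <<'EOF'
cd /opt/myapp
git pull origin master
docker build -t myapp:latest .
docker stop myapp 2>/dev/null || true   # stop/remove the old container if present
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8080 myapp:latest
EOF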

Configuring Amazon Linux AMI instances

I am trying to set up an AMI such that, when booted, it auto-configures itself with a defined "configuration" held somewhere on a server. I came across Chef and Puppet. Considering Puppet, I was able to run through their examples, but I couldn't find one for auto-configuration from a master. I also found out that Puppet Enterprise is not supported on Amazon Linux. The team chose Amazon Linux and would like to keep it rather than switching OS just because one tool doesn't support it. Can someone please give me some idea of how I could achieve this? (I am trying to stay away from home-grown shell scripts, preferring a well-adopted industry tool for maintainability.)
What I have done in the past is to copy /etc/rc.local to /etc/rc.local.orig, and then configure /etc/rc.local to kick off a puppet run and then pave over itself.
/etc/rc.local:
#!/bin/bash
##
# Add pre-Puppet steps here. I pass the hostname in "User Data" when creating
# the VM so I can set it before the node checks in with the master.
##
/usr/bin/puppet agent --test
# Restore the original rc.local so this runs only on first boot, then reboot
/bin/cp -f /etc/rc.local.orig /etc/rc.local
/sbin/init 6
AWS CloudFormation is one of Amazon's recommended ways to provision servers (and other cloud resources, too). You declare all the resources you need in a JSON file, and specify how to provision each server by declaring packages to install, services to run, files to create, and commands to run when the server is created. See the user guide for more information. I also wrote a couple of blog posts about getting started with it.
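As a sketch of the workflow, once the template is written you create and monitor the stack from the AWS CLI; the stack, template, and parameter names below are hypothetical:

# Create the stack; the template declares the EC2 instance and how to configure it
aws cloudformation create-stack \
  --stack-name my-app-stack \
  --template-body file://template.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair

# Watch provisioning progress
aws cloudformation describe-stack-events --stack-name my-app-stack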

How to perform a remote build in Jenkins

I am new to Jenkins. Please help me with my requirement.
I'm running Jenkins in a Windows environment. I have a development box where Jenkins is running successfully. Now I have to run a build on another Windows machine (say, a QA box) from the dev box. Can anyone please suggest how to do this?
The solution is quite simple.
Step 1: Create and configure the slave node (the QA box) in Jenkins.
Go to Manage Jenkins
Click on Manage Nodes
Create and configure a New Node
Step 2: There may be several ways to complete this task.
Configure the jobs according to the new machine (IP, ports, or any other dependencies). A good practice is to keep the build scripts separate per machine, or to keep separate properties files for different machines.
Configure the jobs according to the new slave configuration.
Keep in mind any dependencies on file structure, IPs, and ports.
Step 3: Run the jobs and debug any machine-specific issues.
If you encounter any trouble, go through the logs to find the related problem.
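If the QA box is connected as an inbound (JNLP) node, the agent is typically started on the QA box itself with something like the following; the Jenkins URL, node name, and secret are placeholders, and older Jenkins versions ship the jar as slave.jar rather than agent.jar:

# Download agent.jar from the master's /jnlpJars page, then launch it
# pointing at this node's JNLP endpoint
java -jar agent.jar -jnlpUrl http://your-jenkins-host:8080/computer/qa-box/slave-agent.jnlp -secret your-node-secret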
Create a test node for your QA box
Configure a job to:
Update the latest code on the remote test node, e.g. from SVN
Run the build on the remote test node, e.g. using Ant (a sketch of both steps follows below)
Done
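Those two job steps might look like the following when run as a build script on the QA node; the repository URL and build file name are hypothetical:

# Executed on the QA node as the job's build step
svn checkout https://svn.example.com/myproject/trunk myproject
cd myproject
ant -f build.xml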