How to use fabric on localhost without a password?

I would like to use fabric to deploy a bunch of libraries onto a box that is being automatically provisioned with salt-stack. Currently the function is run as
fab deploy_libs:args,kwargs -H localhost
How can I run this command without it prompting for a password? Typing one in obviously isn't an option, since the whole thing is automated.
Disclaimer: There may be a significantly better way of doing this and I'm not precious about it.

To run a command on the local machine from inside fabfile.py, you can use fabric.operations.local. It runs the command directly on the machine running fab, so there is no SSH connection and therefore no password prompt.
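For illustration, a minimal fabfile sketch along those lines might look like this (the body of deploy_libs is a placeholder; substitute your real install commands):

from fabric.api import local

def deploy_libs(*args, **kwargs):
    # local() runs the command on the machine running fab itself,
    # so there is no SSH connection and no password prompt.
    # Placeholder install step:
    local("pip install -r requirements.txt")

You would then invoke it as fab deploy_libs:args,kwargs without any -H flag at all.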
Since you're using salt, you might also consider doing the extra work to use the cmd.run state, which, once you've set up the correct dependencies, lets you take advantage of salt's dependency graph and run some operations in parallel.

Go cross-platform tests

Problem description
I've developed a CLI in Go. My dev environment is Linux. I wrote unit tests and only produce release executables when the tests pass. When I test or use my tool in a Linux environment, everything works fine.
My CI/CD pipeline is built around goreleaser to produce multi-platform executables. Since my app doesn't use exotic cross-platform functionality, I was quite confident that the Windows executable would work as expected. But it didn't.
Long story short, always normalize paths with filepath.ToSlash(). But this is not my question.
Question
Hence my question is: "since behavior might change on different platforms because of such small mistakes, is it possible to run go test for a list of OS/architecture targets?" I can't imagine rebooting into Windows to test every commit manually, and I don't think discipline is the answer. If it were, we wouldn't test things at all.
Search attempts
A quick search on Google and Stack Overflow for "golang cross-platform tests" didn't return any results. Am I missing something, or is my approach to this problem wrong?
First edit
Most comments pointed out that the only way to test the behavior of an executable on a given platform is... to test it on that platform (in a multi-stage CI/CD, for example). This is so obvious that there might not be a way to achieve it otherwise, I know.
But triggering a parallel CI/CD job on every platform for every commit (of partially untested code) doesn't sound satisfying to me. It IS the only way to know for sure that the code behaves as expected on every targeted platform, but I'm wondering if anyone has stumbled on this issue and found a pre-CI/CD solution to it.
Though it might be the only way to get conclusive test results, it implies triggering CI/CD with parallel tests on each platform. I was looking for a solution on the developer machine, before committing untested code.
You can install a local CI/CD tool which would, on a (local) commit, trigger those tests.
A local GitLab, for instance, can run tests on multiple platforms simultaneously (since GitLab 11.5).
But that implies at least a Docker image in order to test on Windows from your Linux dev environment.
With Go alone, however, as mentioned in "Design and unit-test cross-platform application":
It's not possible to run go test for a target system that's different from the current system.
If you are trying to test a different CPU architecture, you should be able to do it with QEMU. This blog post explains how.
If you are trying to test a different OS, you can probably do everything you need to do in Docker containers. You could use a tool like VSCode’s remote development toolkit to easily dev/build/test a project in a specific container, or you could write a custom Makefile that calls the appropriate Docker commands when you run make test (allowing you to run tests in multiple OSes with one command).

Advice for Web-based Remote Build System

I'm interested in setting up a remote build system at work, initially for internal use, potentially for some customers going forward. We need to compile library code on several different machines (PC, Mac) and with multiple compilers, and it can be a real pain trying to get access to a full set. This is not our main build system, which is Jenkins-based and uses an approach that is not easily modified for the purpose envisaged here.
The idea would be that you could post your source to a website with some basic build parameters, it would compile the code and you could then download the generated code. Ideally users could pick which version of the underlying software they compiled their libraries against. I envisage it being supported by a virtual machine.
The reason I'm posting is that I want to avoid rolling my own as far as possible - longer term that has maintenance implications - and would prefer to build on something pre-existing. Obviously one would expect some adaptation in terms of scripting.
Any suggestions? It would have to be supported on Mac and PC at absolute minimum.
This sounds like something you could do by creating a parameterized Jenkins job (the build params given as input to your web frontend could be passed on to the job, perhaps via the Jenkins API). Personally, I would see if you could skip the step of creating a new web frontend and have users pass their build params directly to Jenkins.
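As a rough illustration of the "via the Jenkins API" route (the server URL, job name, parameter names, and credentials below are all made up), a parameterized build can be triggered from a small script using Jenkins' buildWithParameters endpoint:

import requests  # third-party HTTP client

JENKINS = "https://jenkins.example.com"  # hypothetical Jenkins server
JOB = "library-build"                    # hypothetical parameterized job

# Parameterized jobs expose /job/<name>/buildWithParameters;
# the query parameters become the job's build parameters.
resp = requests.post(
    f"{JENKINS}/job/{JOB}/buildWithParameters",
    auth=("builder", "api-token"),       # user name + API token, both placeholders
    params={"COMPILER": "msvc", "BRANCH": "feature/my-change"},
)
resp.raise_for_status()
print("Build queued at:", resp.headers.get("Location"))

A web frontend (if you keep one) would do essentially the same POST on the user's behalf.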
To support downloading the resulting compiled code, you could have the Jenkins job archive the build as an artifact. Users could then download the files from the result page for that individual build.
As for how to make a Jenkins job accept source code to compile as input, perhaps you could use branches in your CM system? Your users could push their code to a branch, and then pass the branch as a build param. Otherwise, you might be able to use the file parameter feature of Jenkins.

How to run LAMP in an env

TL;DR
How do I encapsulate my Apache/PHP/MySQL/WordPress installation in an environment or sandbox?
In my absence the designers-that-be have decreed that we use WordPress for our newest project, as opposed to the Django-type applications I am familiar with. Another developer has already started making a custom theme, but now he's gone and I'm supposed to take over his work, even though I have no experience whatsoever with LAMP or WordPress.
I thought it would be a good start to encapsulate the project into a proper standalone repository that I can copy and share between machines and people, put on GitHub, etc. Most importantly, it needs to run in a virtualenv-like instance (or equivalent), so it doesn't mess up anybody's other projects, we can run multiple versions, etc.
Can anybody help me with that? At the moment it looks like the required files are spread out over my whole computer, I keep having to touch stuff in my root-directory, I need sudo simply to start and stop the *##+ing server, it's complete lunacy.
Thank you!
Take a look at virtPHP, phpenv and php-build for creating isolated environments. As for typing sudo every time you need to start or stop the server, this is the right way of doing things.

How to make XCode 4 build and run code automatically when changes are saved?

I am using XCode for test driven development in C++.
It occurred to me that I would save a lot of time if XCode could automatically build and run my tests every time I save.
Is there any way to do this (by scripting XCode or otherwise)? Google doesn't seem to have a clue.
I have seen this workflow when using interpreted languages and it really does increase productivity.
Let's assume that my machine is fast enough to build and run tests in a few seconds.
If you're targeting C++, then you're probably out of luck.
With Objective-C, there's a project called «Injection»:
http://injectionforxcode.com/
It tracks changes to your project files, and when a change occurs, it re-builds the changed files as categories placed inside a bundle.
The bundle is then loaded dynamically into the running app, and the contents from the categories replace the running code.
But it's Objective-C; C++ does not have such a runtime or those capabilities.
Anyway, you may want to take a look at it... : )
Automatically? No. But you could write your own FSEvents monitor agent: when a change occurs that requires a rebuild, do something appropriate.
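For example, here is a rough sketch of such an agent using the Python watchdog package (the watched path, file extensions, and the xcodebuild scheme name are placeholders to adapt):

import subprocess
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class RebuildHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Only react to source files, not build products or project metadata.
        if event.src_path.endswith((".cpp", ".h", ".mm")):
            # Placeholder rebuild command: build the project's test scheme.
            subprocess.call(["xcodebuild", "-scheme", "MyTests", "build"])

observer = Observer()
observer.schedule(RebuildHandler(), path="/path/to/MyProject", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()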
The easy way around this: you can configure Xcode to save when building. With that preference enabled you don't need to save explicitly; hitting Run is as simple as hitting Save, and it performs the save, build, and run in the correct order. You may want an intermediate target or scheme for this.
Another option would be to use a version-control commit as a trigger to build and run your tests (saw your comment: use branches).
No, I don't think this can be done.
Most projects don't build and test in the fraction of a second it would require for it to be practical to do on every save anyway (i.e., whenever Xcode autosaves).
A lot of work has gone into the infrastructure for just getting Xcode's live errors and warnings. As long as your project isn't too weird those live errors ought to give a pretty good proxy for actually building it anyway.
For testing you might want to look into continuous integration if you don't already use it.
Grey-beards that grew up before autosave may have developed the habit of occasionally using a key command to save manually. Such users may be able to change that habit by substituting the key command that runs the tests for the key command they use to manually save.

Best Practices for Code/Web Application Deployment?

I would love to hear ideas on how to best move code from development server to production server.
A list of gotchas / "don't do this" items would be helpful.
Any tools to help automate the steps of:
Make backups of existing code, given these list of files
Record the Deployment of these files from dev to production
Allow easier rollback if deployment or app fails in any way...
I have never worked at a company that had a deployment process other than a very manual one: FTPing files from dev to production.
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem, and the real focus should be on ensuring a meaningful software development life cycle (SDLC): planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
Articulate and record what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on and that's translated into various artifacts such as written use cases. Ultimately what you want is something recorded in an electronic system where you can associate code changes with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in that electronic system - typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build - all the bits needed for whatever's changed - and tag that build in source control; that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own. The build is then delivered for...
Validation
Perhaps the most important step, and one that many shops ignore - at their own peril. Defects found in production are exponentially more expensive to fix than those discovered earlier in the process, and in many shops validation is the only step where defects are caught before production - so make sure yours does it.
This should not be done by the programmer! That's like the fox watching the hen house. And whoever is doing it should be following some sort of plan. We use Test Link. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed in the same way as you would deploy into production.
If all goes well (we usually need a minimum of 3 builds) the build is validated. And this goes to...
Deployment
This should be a non-event, because you're taking a validated build and following the same steps you followed in testing. Maybe it first hits a staging server with an automated copying process, but the point is that it shouldn't be an issue at this stage, because you already validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven - these are build and deployment tools used in the Java world which can help you copy/FTP files, make backups, and even check out code from SVN.
You can automate your deployment steps using these tools; for example, Ant will allow you to declare a set of tasks as part of your deployment. So you could, for example:
Check out a revision to a directory using SVNAnt or similar
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
Really, you can do almost anything you wish with Ant if you're willing to put the time in. Maven is a little more structured (and newer), and you can see a discussion of the differences here.
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version%
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
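If batch scripting isn't your thing, the same backup-then-deploy idea can be sketched in a few lines of Python (the paths are placeholders, and this only outlines the backup/rollback steps from the question; it is not a drop-in tool):

import shutil
import time

LIVE = "/var/www/app"          # hypothetical production directory
BACKUPS = "/var/backups/app"   # hypothetical backup location
BUILD = "/tmp/checkout/app"    # freshly checked-out/exported build

# 1. Back up the existing code into a timestamped directory,
#    so a rollback is just copying that directory back over LIVE.
stamp = time.strftime("%Y%m%d-%H%M%S")
shutil.copytree(LIVE, f"{BACKUPS}/{stamp}")

# 2. Deploy the new build over the live directory
#    (dirs_exist_ok requires Python 3.8+).
shutil.copytree(BUILD, LIVE, dirs_exist_ok=True)

print(f"Deployed {BUILD}; previous code saved to {BACKUPS}/{stamp}")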
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
<sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
    <fileset dir="${basedir}">
        <exclude name="**/*.svn"/>
        <exclude name="**/test/"/>
    </fileset>
</sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion-specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems. (We currently use it for both ColdFusion and Java, but it can be used for most languages.) It has a ton of 3rd party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html