Golang deployment on AWS

I have some Go code I want to test locally, then deploy to AWS. Everything is fine as long as I stay in a single directory/package.
However, I want my folder structure to be similar to this:
$GOPATH/src/project/application.go
$GOPATH/src/project/lib1/libcode.go
Locally, to use the code in "lib1" I need to use import "project/lib1"
But when deployed to AWS the "project" folder doesn't exist of course, so instead I have to use import "./lib1"
I can probably solve this problem by either having all code in a single folder/package, or by changing my $GOPATH every time I switch projects, but both of these workarounds feel dirty and awkward.
What's the correct way to tackle this? Thanks.

The normal deployment mechanism for Go is to build the project, which produces a self-contained binary, and to deploy that binary - not the source code. A production machine running a Go application does not need the Go source files and does not need the Go toolchain installed. Go is not an interpreted language, it's not even a bytecode language with a VM (like Java with JVM or C# with CLR); it produces native binaries for the target OS and architecture. That's one of the reasons for its strong performance and efficient memory profile.
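For completeness, a typical flow with the layout from the question (the Linux/amd64 target below is an assumption about the AWS instance): keep the import "project/lib1" form locally, cross-compile, and copy only the resulting binary to the instance.

GOOS=linux GOARCH=amd64 go build -o application project

The import paths never need to change, because the source tree never leaves the development machine.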

Go cross-platform tests

Problem description
I've developed a CLI in Go. My dev environment is Linux. I wrote unit tests and only produce releases of the executables when tests pass. When I test or use my tool in a Linux environment, everything works fine.
My CI/CD pipeline is built around goreleaser to produce multi-platform executables. Since my app doesn't use exotic cross-platform functionality, I was quite confident that the Windows executable would work as expected. But it didn't.
Long story short: always normalize paths with filepath.ToSlash(). But that is not my question.
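(For the curious, a minimal sketch of the kind of normalization I mean; the file names here are made up:)

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	p := filepath.Join("config", "app.yaml") // "config\app.yaml" on Windows, "config/app.yaml" elsewhere
	fmt.Println(filepath.ToSlash(p))         // "config/app.yaml" on every platform
}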
Question
Hence my question is: "Since behavior might change on different platforms because of such little mistakes, is it possible to run go test for a list of OS/architecture targets?" I can't imagine rebooting into Windows to test every commit manually, and I don't think discipline is the answer. If it were, we wouldn't test things at all.
Search attempts
A quick search on Google and Stack Overflow for "golang cross-platform tests" didn't return any results. Am I missing something, or is my approach to this problem wrong?
First edit
Most comments pointed out that the only way to test the behavior of an executable on a given platform is... to test it on that platform (in a multi-stage CI/CD, for example). This is so obvious that there might not be a way to achieve it otherwise, I know.
But triggering a parallel CI/CD job on every platform for every commit (of partially untested code) doesn't sound satisfying to me. It IS the only way to know for sure that the code behaves as expected on every targeted platform, but I'm wondering if anyone has stumbled on this issue and found a pre-CI/CD solution to it.
Though it might be the only way to get conclusive test results, it implies triggering CI/CD with parallel tests on each platform. I was looking for some solution on the developer machine, before committing untested code.
You can install a local CI/CD tool which would, on (local) commit, trigger those tests.
A local GitLab, for instance, can run tests on multiple platforms simultaneously (since GitLab 11.5).
But that implies at least a Docker image in order to test on Windows from your Linux dev environment.
With Go alone, however, as mentioned in "Design and unit-test cross-platform application":
It's not possible to run go test for a target system that's different from the current system.
If you are trying to test a different CPU architecture, you should be able to do it with QEMU. This blog post explains how.
If you are trying to test a different OS, you can probably do everything you need to do in Docker containers. You could use a tool like VSCode’s remote development toolkit to easily dev/build/test a project in a specific container, or you could write a custom Makefile that calls the appropriate Docker commands when you run make test (allowing you to run tests in multiple OSes with one command).
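As a partial pre-CI/CD check on the developer machine, you can at least cross-compile and vet for each target and run the Linux tests in a container. This catches compile-time and vet-level problems only, not behavioral ones; the image tag below is just an example, and it assumes a module-based project:

GOOS=windows GOARCH=amd64 go build ./...
GOOS=windows GOARCH=amd64 go vet ./...
docker run --rm -v "$PWD":/src -w /src golang:1.21 go test ./...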

Advice for Web-based Remote Build System

I'm interested in setting up a remote build system at work, initially for internal use, potentially for some customers going forward. We need to compile library code on several different machines (PC, Mac) and with multiple compilers, and it can be a real pain trying to get access to a full set. This is not our main build system, which is Jenkins-based and uses an approach that is not easily modified for the purpose envisaged here.
The idea would be that you could post your source to a website with some basic build parameters, it would compile the code and you could then download the generated code. Ideally users could pick which version of the underlying software they compiled their libraries against. I envisage it being supported by a virtual machine.
The reason I'm posting is that I'd like to avoid rolling my own as much as possible - longer term that has maintenance implications - and would prefer something pre-existing. Obviously one would expect some adaptation in terms of scripting.
Any suggestions? It would have to be supported on Mac and PC at absolute minimum.
This sounds like something you could do by creating a parameterized Jenkins job (the build params given as input to your web frontend could be passed on to the job, perhaps via the Jenkins API). Personally, I would see if you could skip the step of creating a new web frontend, and have users pass their build params directly to Jenkins.
To support downloading the resulting compiled code, you could have the Jenkins job archive the build as an artifact. Users could then download the files from the result page for that individual build.
As for how to make a Jenkins job accept source code to compile as input, perhaps you could use branches in your CM system? Your users could push their code to a branch, and then pass the branch as a build param. Otherwise, you might be able to use the file parameter feature of Jenkins.
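If you do keep a thin web frontend, triggering the parameterized job over the Jenkins API is a single authenticated POST. A minimal sketch follows; the Jenkins URL, job name, parameter names and credentials are all made up, and depending on your Jenkins configuration a CSRF crumb may also be required:

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	params := url.Values{}
	params.Set("BRANCH", "feature/my-lib") // which branch to build
	params.Set("PLATFORM", "win32")        // example build parameter

	req, err := http.NewRequest("POST",
		"https://jenkins.example.com/job/remote-lib-build/buildWithParameters?"+params.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("builduser", "api-token") // Jenkins user + API token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Jenkins responded:", resp.Status) // a 201 means the build was queued
}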

C++ build artifacts management

We are moving our build and testing system to Jenkins, and are looking for an easy way (where we don't have to code all the logic ourselves) to manage the build artifacts.
Basically we need an organised way to store them, ordered by the type of the build, the user who built it, and so on. For example:
johnd/nightly/r543241/win32/program.zip
johnd/nightly/trunk/lin64/program.tar.gz
master/release/2.1/win32/program.zip
This way we can upload when a build is complete, and retrieve the required artifact in the testing stage with ease.
Until now we simply stored the files in directories on NFS, but recently started considering an artifact manager. I've looked at Artifactory, Archiva and Nexus, but all seem very Java-centered, or at least require Maven to work with. Since I don't want to introduce more complexity (we mainly work with Python; SCons is our build tool) and I don't want to introduce Maven to the mix, I'm looking for something that has an easy command line (or better, a REST/Python interface) to upload, download and manage artifacts.
If you don't use an artifact manager, but use some other clever method to manage your C++ artifacts for release/test needs, I'd be happy to hear that also.
I have exactly the same issue. I didn't succeed in integrating Artifactory easily, so I gave up on it.
For the moment I use the Copy Artifact plugin and scp to copy the files to an NFS share. But this is not a good solution, since shares may be mounted differently on two machines, and Windows cannot access them directly.
But a real artifact manager would be such a good thing for our project.
In my previous job, I was using a documentation CMS (LiveLink) which provided good services:
- self-organizing folders.
- persistent URLs (we could move and rename folders and files, and the public URL to a given file never changed; that was SOOO great: "http://server/go/123456789123456789" always pointed to the same file, so you just had to give this URL to wget and you got the file).
- file versioning.
- metadata and advanced search.
I'm still searching for such a solution in open-source software and haven't found it.
If you just need to share artifacts between two jobs, simply use the "Copy Artifact" plugin. I use it to split my job into a build job and a test job (actually there are 3 test jobs: one very fast, run after each commit; one quite slow, run every day; and one very slow, run each weekend).
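For what it's worth, if the manager you end up with exposes a plain HTTP interface (Artifactory and Nexus both let you deploy with an HTTP PUT to a repository path, as far as I know), a build step doesn't need Maven at all: it can PUT the file to a path that encodes user/type/revision/platform. A rough sketch of the single HTTP call, in Go here, though the same PUT works from curl or Python; the server, repository layout and credentials are made up:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("program.zip") // the artifact produced by the build
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Path mirrors the layout from the question: user/build-type/revision/platform/file
	target := "https://artifacts.example.com/repo/johnd/nightly/r543241/win32/program.zip"
	req, err := http.NewRequest("PUT", target, f)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("builduser", "secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("upload status:", resp.Status)
}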

How to make Xcode 4 build and run code automatically when changes are saved?

I am using Xcode for test-driven development in C++.
It occurred to me that I would save a lot of time if Xcode could automatically build and run my tests every time I save.
Is there any way to do this (by scripting XCode or otherwise)? Google doesn't seem to have a clue.
I have seen this workflow when using interpreted languages and it really does increase productivity.
Let's assume that my machine is fast enough to build and run tests in a few seconds.
If you're targeting C++, then you're probably out of luck.
With Objective-C, there's a project called «Injection»:
http://injectionforxcode.com/
It tracks the changes to your project files, and when a change occurs, it re-builds the changed files as categories, placed inside a bundle.
The bundle is then loaded dynamically into the running app, and the contents of the categories replace the running code.
But it's Objective-C. C++ does not have such runtime capabilities.
Anyway, you may want to take a look at it... : )
Automatically? No. You could write your own FSEvents monitor agent; when a change occurs that requires a rebuild, do something appropriate.
The easy way around this: you can configure Xcode to save when building. You don't need to save explicitly; just hit Run with this preference enabled. In that sense, hitting Run is as simple as hitting Save, and it performs a save, build and run in the correct order. You may want an intermediate target or scheme for this.
Another option would be to use a version-control commit as a trigger for a build and run of your tests (saw your comment: use branches).
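To illustrate the "write your own monitor agent" idea, here is a rough polling sketch; the watched directory and the xcodebuild arguments are placeholders, and a real agent would more likely hook FSEvents than poll:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// latestModTime walks the source tree and returns the newest modification time it finds.
func latestModTime(root string) time.Time {
	var latest time.Time
	filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err == nil && !info.IsDir() && info.ModTime().After(latest) {
			latest = info.ModTime()
		}
		return nil
	})
	return latest
}

func main() {
	root := "./MyProject" // placeholder: project source directory
	last := latestModTime(root)
	for {
		time.Sleep(2 * time.Second) // poll interval
		if now := latestModTime(root); now.After(last) {
			last = now
			// Placeholder command: build the test target; swap in whatever runs your tests.
			cmd := exec.Command("xcodebuild", "-scheme", "MyProjectTests", "build")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Println("build failed:", err)
			}
		}
	}
}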
No, I don't think this can be done.
Most projects don't build and test in the fraction of a second it would require for it to be practical to do on every save anyway (i.e., whenever Xcode autosaves).
A lot of work has gone into the infrastructure for just getting Xcode's live errors and warnings. As long as your project isn't too weird those live errors ought to give a pretty good proxy for actually building it anyway.
For testing you might want to look into continuous integration if you don't already use it.
Grey-beards that grew up before autosave may have developed the habit of occasionally using a key command to save manually. Such users may be able to change that habit by substituting the key command that runs the tests for the key command they use to manually save.

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied with advice to use things like make, cmake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration: yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason). I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system. After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous integration build process, it's going to be driven by an Ant or make-style script. When changes are detected, your CI process will check the code out of version control onto a separate build machine, compile, test, package, deploy, and create a summary report.
Let's say you have 5 people working on the same set of code. Each of those 5 people is making updates to the same set of files. Now you may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's code and try. This is easy to do every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, it checks whether the code compiles for everyone all the time. Everyone always knows if something is wrong with the build and what the problem is, and no one has to do any work to figure it out. Small things add up: it may take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if you have multiple people doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing too. Our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs them to make sure that they all work and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure the code compiles is fine, but an automation server can handle many, many steps automatically that would take a person maybe an hour to do.
Taking this a step further, if you have an automated deployment process along with the build server, the deployment is automatic. Anyone who can press a button to run the process can deploy code to QA or production. This means that a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have the process, it was always a crapshoot as to whether or not everything would be installed correctly, and generally it was a network admin or a programmer who had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.
The IDE build systems I've used are all usable from automated build / CI tools, so there is no need for a separate build script as such.
However, on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (i.e. merging).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may use a text-based format, like XML, but XML is notoriously hard to merge with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.