How can you find out Azure-pipeline image content? - c++

I'm new to Azure Pipelines and struggling to put together a C++-oriented pipeline that uses CMake and properly compiles, runs tests, and builds documentation on Ubuntu, macOS, and Windows.
I managed the macOS and Ubuntu cases rather easily, but am struggling with the Windows case, not knowing what's installed and what's on the system PATH for the given image & container I've selected.
Not being super familiar with the Azure platform, I'm basically relying on a commit-push-run-pipeline cycle for every single little change to my YAML file, thus wasting time and resources.
I can't imagine that the only way is to blindly try out commands by committing, pushing, and running the pipeline.
I managed to find a basic description of the currently (hopefully) available images here; following the included-software link for Windows, you end up on a comprehensive list of what's supposedly installed (I have some doubts about whether this documentation actually matches the content of the image). Calling some of the tools present in that list, like cmake and choco, failed. Whether or not they're actually installed and on the system PATH, I have no idea.
Q1: Is there any way to locally test out an Azure-Pipeline YAML?
Q2: Is there any way to figure out what is actually installed on a given image/container (without issuing a DIR /s from the root folder??)
Q3: Is it possible to connect to a running container (or is it a VM???) instance and directly tinker with it?
Q4: Alternatively, is it possible to run such an image locally (Docker)? Does it imply execution on a Windows machine or is that a standalone VM image?
EDIT: I found this question, although it doesn't quite answer mine: Is there a tool to validate an Azure DevOps Pipeline locally?

Q1: Is there any way to locally test out an Azure-Pipeline YAML?
The answer is yes. You can create your own private agent to execute the Azure Pipelines YAML:
Self-hosted agents
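As a minimal sketch (the pool name MyOwnPool is an assumption; use whatever pool you registered your agent into), pointing a pipeline at such an agent only takes a pool block:

    # Minimal sketch: run the pipeline on a self-hosted agent instead of a
    # Microsoft-hosted image. "MyOwnPool" is a placeholder pool name.
    trigger:
    - main

    pool:
      name: MyOwnPool   # instead of: vmImage: 'windows-latest'

    steps:
    - script: cmake --version
      displayName: Verify CMake is reachable on this agent

Because the agent is your own machine, you can iterate on the YAML without burning hosted minutes, and inspect the machine directly when a step fails.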
Q2: Is there any way to figure out what is actually installed on a given
image/container (without issuing a DIR /s from the root folder??)
As you know, you can check the Software document for what is installed on the agent. If you want to know the install path of some software, you can check the debug log of the build task; for example, for cmake, check the build log of the CMake task.
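If you would rather probe the image than trust the documentation, one cheap trick (a sketch, not an official feature) is a throwaway diagnostic step whose output you read in the pipeline log:

    # Turn on verbose logging for every task.
    variables:
      system.debug: true

    steps:
    # On a Windows image this runs under cmd.exe, so 'where' and %PATH% work.
    - script: |
        where cmake
        where choco
        cmake --version
        echo %PATH%
      displayName: Inspect tools and PATH on the hosted image

One run answers the installed-and-on-PATH question for every tool on your list, instead of one commit per guess.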
Q3: Is it possible to connect to a running container (or is it a
VM???) instance and directly tinker with it?
For the hosted agent, I'm afraid the answer is no.
Q4: Alternatively, is it possible to run such an image locally
(Docker)? Does it imply execution on a Windows machine or is that a
standalone VM image?
The answer is yes, you can Run a self-hosted agent in Docker. And it does imply execution on a Windows machine: Windows containers need a Windows host.

Related

boost::filesystem not working in Google Cloud Run (using gVisor)

I've created a docker container (ubuntu:focal) with a C++ application that is using boost::filesystem (v1.76.0) to create some directories while processing data. It works if I run the container locally, but it fails when deployed to Cloud Run.
A simple statement like
boost::filesystem::exists(boost::filesystem::current_path())
fails with "Invalid argument '/current/path/here'". It doesn't work in this C++ application, but from a Python app running equivalent statements, it does work.
Reading the docs, I can see Cloud Run uses gVisor and not all system calls are fully supported (https://gvisor.dev/docs/user_guide/compatibility/linux/amd64/); nevertheless, I would expect simple calls to work: checking whether a directory exists, creating a directory, removing one, ...
Maybe I'm doing something wrong when deploying my container. Is there any way to work around it? Any boost configuration I can use to prevent it from using some syscalls?
Thanks for your help!

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to the configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment tool for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, and I know what Ansible is, the logic behind it, and its purpose. I do not have the same level of understanding of Docker, but I get the idea. I went through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker is, itself, a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, declares it a stable build. How does Docker fit here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile source-coded within the app, build the image of the application, start the container, and run all the tests against it? Or does it run the tests the classic way and, if all is good, then build the Docker image from the Dockerfile and save it in a private place?
Should Jenkins tag the final image using x.y.z for example!?
Docker container configuration:
Suppose we have an image built by Jenkins and stored somewhere. How do we handle deploying the same image into different environments, and even with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this issue without breaking Docker principles? Are these configurations baked into the image when it gets built, or when the container based on it is started? If so, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container. So how do I inject the configuration settings? Does Ansible have to log in to the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host!? If not, how is this handled?!
Excuse me if this was a long question or if I misspelled something, but this is me thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please, it would be very helpful to read about your experiences and solutions, because this looks like a common workflow.
I would like to answer in parts.
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Since Docker images are the same everywhere, you use your Docker images as if they were production images. Therefore, when somebody commits code, you build your Docker image, run tests against it, and when all tests pass, tag that image accordingly. Since Docker is fast, this is a feasible workflow.
Also, Docker image changes are incremental, so your images will have minimal impact on storage. When your tests fail, you may also choose to save that image; that way, a developer can pull it and easily investigate why the tests failed. Developers may choose to run the tests on their own machines too, since the Docker images in Jenkins and on their machines are no different.
What this brings is that all developers have the same environment and the same versions of all software, since you decide which ones are used in the Docker images. I have come across bugs that were due to differences between developer machines. For example, within the same operating system, Unicode settings may affect your code; but in Docker images, all developers test against the same settings and the same software versions.
Docker container configuration:
If you are using a private repository (and you should use one), then configuration changes will not affect hard disk space much. Therefore, except for security-sensitive configuration such as DB passwords, you can apply configuration changes to the Docker images ("baking the configuration into the container"). Then you can use Ansible to apply the non-stored configuration to deployed images before/after startup, using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
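To make the environment-variables/volumes half concrete, here is a hypothetical docker-compose sketch (service name, registry, variables, and paths are invented for illustration) that injects per-environment settings at container start instead of baking them in:

    version: "2"
    services:
      app:
        image: myregistry.example.com/myapp:1.2.3   # same image for every environment
        environment:
          DB_HOST: db.staging.internal              # differs per environment
          QUEUE_URL: amqp://mq.staging.internal:5672
        volumes:
          # Mount the environment's config file read-only over the baked-in default.
          - ./config/staging/vhosts.conf:/etc/myapp/vhosts.conf:ro

The same image is promoted unchanged from dev to staging to production; only this file (or the equivalent Ansible variables) changes.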
Does Ansible have to log in to the Docker image, before it's running (
this sounds insane to me ) and use its Jinja2 templates the same way
as with a classic host!? If not, how is this handled?!
No, Ansible will not log in to the Docker image, but Ansible with Jinja2 templates can be used to change the Dockerfile. You can template the Dockerfile and inject your configuration into different files. Tag the resulting images accordingly and you have configured images to spin up.
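For the runtime-injection side mentioned earlier, here is a hedged Ansible sketch (module parameters vary across Ansible versions, and every name below is illustrative) that renders a config file from a Jinja2 template and starts the container with environment-specific values, with no need to log in to the image:

    - hosts: app_servers
      tasks:
        - name: Render environment-specific config from a Jinja2 template
          template:
            src: app.conf.j2
            dest: /opt/myapp/app.conf

        - name: Start the app container with injected settings
          docker_container:
            name: myapp
            image: "myregistry.example.com/myapp:{{ app_version }}"
            state: started
            env:
              DB_HOST: "{{ db_host }}"
              S3_ENDPOINT: "{{ s3_endpoint }}"
            volumes:
              - /opt/myapp/app.conf:/etc/myapp/app.conf:ro
            ports:
              - "80:8080"

The variables (app_version, db_host, ...) come from your inventory or group_vars, one set per environment.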
Regarding your question about handling multiple environment configurations using the same Docker image: I have been planning on using a service-discovery tool like Consul as a centralized config/property-management tool. So, when you start your container up, you set an ENV var that tells it what application it is (appID) and what environment config it should use (e.g., MyApplication:Dev), and it will pull its config from Consul at startup. I still have to investigate the security around Consul (if we are storing DB connection credentials in there, for example, how do we restrict who can query/update those values?). I don't want to use this just for containers, but for all apps in general. Another cool capability is to change a config value in Consul and have a hook back into your app to apply the change immediately (maybe a REST endpoint on your app to push changes down to and apply dynamically). Of course, your app has to be written to support this!
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
Although not a complete solution, I have suggestions for two of your issues. They might not be perfect, but these are the practices we use in our workflow, and they have proven themselves so far.
Defining different environments: supposing you've written a different Ansible role for each environment you launch, we define an environment variable that sets the environment we wish the container to belong to. Using that variable, we then download the suitable configuration file from an S3 bucket into the container (which should be possible if you supply AWS creds or give your server an IAM role) and inject these parameters into the code when building it.
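As a rough illustration of that download step (the bucket, paths, and APP_ENV variable are all made up; it assumes AWS credentials or an instance IAM role are in place), the Ansible task could be as simple as:

    # Pick the config file by the APP_ENV variable and fetch it from S3.
    - name: Download environment-specific config from S3
      command: >
        aws s3 cp
        s3://my-config-bucket/{{ lookup('env', 'APP_ENV') }}/app.conf
        /opt/myapp/app.conf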
Ansible doesn't need to log in to the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
Another solution is pushing several images to your Docker Hub repo and then pulling the appropriate image according to the environment at hand.
In broader strokes: I've tried launching our app completely with Ansible and it was hell; many configuration steps are tricky and get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible and deploying the app itself with Docker, things got a lot easier.

Want to make a Job on Hudson C/C++

Hopefully I'll find someone here who has experience with Hudson and its features.
I have Hudson installed; that did not reveal any problems. But now I want to create a new job, and I'm developing in C/C++.
In addition, I am working with Subversion (SVN), and that's where I run into the first error: Hudson does not find my SVN. It says that I need authentication. As I learned, I can authenticate with Hudson, but that does not work.
Maybe one of you knows how to create such a project. These are the things the Hudson job should do:
Hudson deletes my project on my computer (locally).
Then Hudson accesses my SVN and checks out the project from there.
Hudson then compiles the whole thing (ideally with the Visual Studio 2008 C/C++ compiler). The compiler then produces a *.exe file.
Now Hudson starts the project from that *.exe file and runs the program.
Last but not least, Hudson informs the people working on the project via email, whether there was an error or everything went fine.
That is what I'm hoping to get out of Hudson; otherwise it isn't much use to me. I know that I can do all of this via a batch file, but that's not my goal: I want Hudson to automate it so that my builds/tests start daily at midnight.
Do you think my requirements for Hudson are too high?
I would be very grateful for your help, as I have been stuck for days.
Here is a "basic" Hudson job
Create a new free-style software project job.
Configure that job.
(Optional) Configure triggers, such as "timer", "SCM polling", or others.
(Optional) Under Source Code Management section, select your SCM source and configure your repositories and local workspace
Under Build section, select Add build step and select:
Execute Shell if on *nix
OR
Execute Windows Batch Command if on Windows
OR
Pick whatever build-step plugin you are using.
(If using either of the "execute" build steps) Write your build/make/compile command as you would from command line.
(If using another plugin build step) Configure the plugin options according to your requirements.
(Optional) Archive the artifacts of the build with Archive the artifacts under Post-build Actions
(Optional) Execute other post-build actions
(Optional) Send out an email
Now to address your specific scenario. First things first: your question is too broad and may get locked. Don't get discouraged if that happens; create a separate question for each item individually. I cannot cover all these items in detail, but I will give you an overview.
The SCM part
Based on your previous question, No Credentials to try in Hudson, I am now guessing that you are not providing Hudson with an HTTP URL to your SVN server, but trying to give it your local workspace location... Please do the command-line check that I asked for in that question.
You need to provide it with a proper HTTP server URL. Hudson will check out the project from the SVN URL you provided, into what is called a workspace. The location of the workspace can differ based on your Hudson configuration, but it is a folder inside the Hudson installation that is dedicated to the job. It can be referenced from within the job through the %WORKSPACE% environment variable.
There are ways to use a different workspace location, but that is outside the scope of this overview. The whole SCM part is also optional, you can rely on existing file system, but this is not a good approach, and again, out of scope of this overview.
The Build step
After Hudson has checked out/updated the workspace from your SVN comes the build step. Hudson can do Execute Windows Batch Command by default. It can also Invoke Ant by default. (It can also do Maven, but that is not applicable to your situation.)
To do other types of builds, you need a Build Wrapper plugin. In your particular case, the MSBuild plugin is probably what you want. I've never used MSBuild, so I cannot give you details. Again, if you have a specific issue with how to use the MSBuild plugin, you should probably make that a separate question.
So, using either Execute Windows Batch Command or MSBuild plugin, configure your building step.
Running the exe???
This is very vague. You want to start the .exe, and then what? Will it quit, and do you need an exit code? Do you want to see it on the screen? Again, this is very broad and deserves a separate question (or read existing questions first). If you just want to call the .exe, you can configure a second Execute Windows Batch Command step and type call path\to\yourfile.exe there. But most likely you will not see it on screen. Read my answer here, Open Excel on Jenkins CI, for details on launching an .exe from Hudson/Jenkins so that it is visible on screen.
Email
If you want a simple email, Hudson's Post-build Actions include a way to send one. For better customization options, you will want the Email-ext plugin. Once again, if you need details on how to use the Email-ext plugin, create a new question (after searching existing questions first), as this is too much to cover in one answer.
Conclusion
Your requirements are not too high, but Hudson is not a magic tool that will do the work for you. You still need to configure every step of it. And unless you have a Maven-based project (which integrates very well with Hudson), a lot of the work will need to be done through Execute Windows Batch Command steps and scripting of your own.

How to simulate fabric execution

Is there a way to tell Fabric to just print the commands it would execute instead of actually executing them?
I'm preparing an installation script, and if it fails I'll have to undo the steps performed before the error occurred.
I've checked the "fab" command parameters but found nothing about this.
Thank you for your help.
There are tickets open on GitHub (including issue 26) that request such a feature. The challenge described in that thread is that you can't always be certain what the script would do; i.e., some behaviour may change depending on the state of the remote server.
As an alternative, you could look at reproducing your environment in a virtual machine (Vagrant makes doing this really easy) and testing whether your scripts run as expected there.
If you're really concerned about this, a configuration management system (particularly one that can reverse changes), like Puppet or Chef, may make more sense.

Utilizing EC2 or Azure to scale out a distributed C# build system?

Quite a few build and CI systems support steps for pushing build output to Azure, but I haven't seen any that can actually run on Azure (or EC2). Ideally I would like to be able to spin up an arbitrary number of instances (depending on the # of pending submits) to deal with the actual build + quality gates (UTs, FxCop, other static-analysis tools) + source repository check-in process.
Are there existing tools which can do this, or has anyone built something which they can discuss?
Thanks!
[Edit: I found this question which is quite similar but didn't have any informative answers, so I'll keep my question alive]
If you're using Git or Mercurial for source control, AppHarbor might be what you're looking for. It's a CI build/deploy environment that runs exclusively in the cloud (EC2), and can deploy build output to Azure.
Here are some links for reference:
http://sourcecodebean.com/archives/appharbor-heroku-for-net/987
http://lostechies.com/chrismissal/2011/03/12/using-appharbor-for-continuous-integration
http://haacked.com/archive/2011/05/12/making-let-me-bing-that-for-you-open-source.aspx
http://appharbor.com/page/pricing
The open source Jenkins CI server has an EC2 plugin that will spin up EC2 instances automatically depending on your build load. I couldn't find anything similar for Azure, but I highly recommend Jenkins: it's easy to configure, well maintained, and has stacks of features.
Continuous Integration on Windows Azure http://code.google.com/p/cassis/ (over Mercurial)
Disclaimer: work produced by my 1st year CS students
TeamCity also has support for this: http://www.jetbrains.com/teamcity/features/amazon_ec2.html