Best practice for using Codacy in my Jenkins pipeline

Currently, I am subscribed to Codacy Pro.
I want to use Codacy in my Jenkins pipeline, and I found codacy-analysis-cli.
I tried to do a test on my local machine using this command:
codacy-analysis-cli analyze --directory /home/codacy/backend-service --project-token <myprojectoken> --allow-network --verbose --upload
But when I checked app.codacy.com, there were no recent results.
Can you help me please? What is the best practice for using Codacy in my Jenkins pipeline?

That command seems about right. In cases like this, it is best to reach out to Codacy support and specify which repository you are trying to send results to, so you can get help specific to your repository.
That said, make sure that:
The project-token is set for the correct repository
The settings have the Run analysis through build server option enabled
If you still have problems, it is best to reach out to support.
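As for the Jenkins side, the usual pattern is simply to run the CLI from a shell step in your pipeline and let the stage fail if the analysis fails. A minimal sketch of such a step (the credential name CODACY_PROJECT_TOKEN is just an example; store and inject the token however your Jenkins is set up):
# Runs inside a Jenkins shell step; $WORKSPACE is set by Jenkins, and
# CODACY_PROJECT_TOKEN is assumed to come from the credentials store
# rather than being hard-coded in the pipeline.
codacy-analysis-cli analyze \
  --directory "$WORKSPACE" \
  --project-token "$CODACY_PROJECT_TOKEN" \
  --allow-network \
  --upload \
  --verbose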

Related

Best practice to install Tableau Server as IaC

I am trying to figure out what the best practice is when creating a new server on AWS EC2.
For this I chose Tableau Server. The first time, as the Tableau docs recommend, I did the install myself, but I would like to keep everything as automated as possible; the idea behind that is: if the EC2 instance gets destroyed, how can I recover everything quickly?
I am using Terraform to store all the AWS infrastructure as code, but the installation itself is not automated yet.
To do that, I have two options: Ansible (which I have never worked with before), or, in this particular case, Tableau's automated install script in Python, which I could add to the EC2 launch template's user data (roughly sketched below) so that Terraform can bring the server up in minutes.
Which one should be chosen, and why? Both seem to accomplish the final goal.
It also raises some doubts, such as:
This brings the server up with a full installation of the software, but to get all the users and the rest of the Tableau setup back I still have to restore a snapshot anyway, right? Is there any other tool for that?
And if the manual install of the software is fast enough, why should I use IaC to keep the installation as code instead of just documenting the installation script and keeping only the infrastructure as code?
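For illustration, the launch-template idea I have in mind is roughly the following user-data sketch (every bucket, file name, and flag here is a placeholder; the real automated installer and its arguments would come from Tableau's documentation):
#!/bin/bash
# Hypothetical EC2 user data, run on first boot of the instance.
set -euo pipefail
# Fetch the installer script and settings bundles from S3 (placeholder names).
aws s3 cp s3://my-tableau-bucket/automated-installer /tmp/
aws s3 cp s3://my-tableau-bucket/config.json /tmp/
aws s3 cp s3://my-tableau-bucket/registration.json /tmp/
chmod +x /tmp/automated-installer
# Run the unattended install; check Tableau's docs for the real flags.
/tmp/automated-installer -s /tmp/config.json -r /tmp/registration.json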

How can you find out Azure-pipeline image content?

I'm new to Azure Pipelines and struggling to put together a C++-oriented pipeline that uses CMake and properly compiles, runs tests, and builds documentation on Ubuntu, macOS, and Windows.
I managed the macOS and Ubuntu cases rather easily but am struggling with the Windows case, not knowing what's installed and what's on the system PATH for the image & container I've selected.
Not being super familiar with the Azure platform, I'm basically relying on commit-push-run-pipeline for every single little change to my YAML file, thus wasting time and resources.
I can't imagine that the only way is to blindly try out commands by commit, push and run the pipeline.
I managed to find a basic description of the currently (hopefully) available images here; following the included-software link for Windows, you end up on a comprehensive list of what's supposedly installed (I have some doubts about whether this documentation actually matches the content of the image). Calling some of those tools present in that list, like cmake and choco, failed. Whether or not they're actually installed and on the system PATH, I have no idea.
Q1: Is there any way to locally test out an Azure-Pipeline YAML?
Q2: Is there any way to figure what is actually installed on a given image/container (without issuing a DIR /s from the root folder??)
Q3: Is it possible to connect to a running container (or is it a VM???) instance and directly tinker with it?
Q4: Alternatively, is it possible to run such an image locally (Docker)? Does it imply execution on a Windows machine or is that a standalone VM image?
EDIT: I found this question, although it doesn't quite answer mine: Is there a tool to validate an Azure DevOps Pipeline locally?
Q1: Is there any way to locally test out an Azure-Pipeline YAML?
The answer is yes. You can create your own private (self-hosted) agent to execute the Azure Pipelines YAML.
Self-hosted agents
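A rough sketch of setting one up on a Linux box (the organization URL, PAT, pool, and agent name are placeholders, and the exact package version comes from the "New agent" dialog in Azure DevOps):
# Download, configure, and run a self-hosted agent.
mkdir myagent && cd myagent
tar zxvf ~/Downloads/vsts-agent-linux-x64-2.x.x.tar.gz
# Register against your organization with a personal access token (PAT).
./config.sh --unattended \
  --url https://dev.azure.com/your-organization \
  --auth pat --token YOUR_PAT \
  --pool Default --agent my-local-agent
# Run the agent interactively.
./run.sh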
Q2: Is there any way to figure what is actually installed on a given image/container (without issuing a DIR /s from the root folder??)
As you have found, you can check the Software document for what is installed on the agent. If you want to know the install path of a particular piece of software, you can check the debug log of the build task that uses it; for cmake, for example, the log of the CMake task shows where the executable was resolved.
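If you only want to poke around the image from a pipeline run, a throwaway diagnostic step is another option; for example, something like this (a sketch, assuming the bash that ships with the hosted Windows images):
# Throwaway diagnostic step: dump the PATH and check where common tools resolve.
echo "$PATH" | tr ':' '\n'
for tool in cmake choco git python; do
  which "$tool" || echo "$tool not found on PATH"
done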
Q3: Is it possible to connect to a running container (or is it a VM???) instance and directly tinker with it?
For the hosted agents, I am afraid the answer is no.
Q4: Alternatively, is it possible to run such an image locally (Docker)? Does it imply execution on a Windows machine or is that a standalone VM image?
The answer is yes: you can run a self-hosted agent in Docker, and it does imply execution on a Windows machine.
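A rough sketch of what running such an agent looks like, assuming you have built an image from the Dockerfile in the "Run a self-hosted agent in Docker" article (azp-agent here is a placeholder name; the AZP_* variables are the ones its start script reads):
docker run -e AZP_URL=https://dev.azure.com/your-organization \
           -e AZP_TOKEN=YOUR_PAT \
           -e AZP_AGENT_NAME=docker-agent-1 \
           azp-agent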

How to get SonarQube results back to CodeBuild

I've seen many discussions on-line about Sonar web-hooks to send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality-gate pass/fail status) to the pipeline.
Is the Sonar web-hook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code-project?
Our code is in BitBucket. I'm working with the AWS admin who will create the CodePipeline that fires when code is attempted to be pushed into the repo. sonar-scanner will be run, and then we'd like the pipeline to stop if the quality does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for the host would be, what, the AWS instance running CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could possibly be migrated to a shell script that runs in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
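To answer the API question directly: you do not strictly need a webhook. After sonar-scanner finishes it leaves behind a report-task.txt file (by default under .scannerwork), and from there you can poll SonarQube's web API for the Quality Gate result and fail the build yourself. A rough shell sketch of that approach (assumes curl and jq are available and that SONAR_HOST_URL and SONAR_TOKEN are passed to the build):
#!/bin/bash
# Rough sketch: fail the build if the SonarQube Quality Gate is not OK.
set -euo pipefail
# sonar-scanner writes the compute-engine task id here after each analysis.
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)
# Wait for the background (compute engine) analysis to finish.
STATUS="PENDING"
while [ "$STATUS" = "PENDING" ] || [ "$STATUS" = "IN_PROGRESS" ]; do
  sleep 5
  STATUS=$(curl -s -u "$SONAR_TOKEN:" \
    "$SONAR_HOST_URL/api/ce/task?id=$CE_TASK_ID" | jq -r .task.status)
done
# Look up the analysis id and ask for its Quality Gate status.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" \
  "$SONAR_HOST_URL/api/ce/task?id=$CE_TASK_ID" | jq -r .task.analysisId)
GATE=$(curl -s -u "$SONAR_TOKEN:" \
  "$SONAR_HOST_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" \
  | jq -r .projectStatus.status)
echo "Quality Gate status: $GATE"
[ "$GATE" = "OK" ] || exit 1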

Configure processing server role with config patches

The Sitecore documentation provides some pretty clear instructions on how to configure a Sitecore instance as a processing server:
https://doc.sitecore.net/sitecore_experience_platform/xdb_configuration/configure_a_processing_server
However, many of those steps require enabling/disabling of files manually on the installed server. Has anybody seen or built a patch file (similar to SwitchMasterToWeb) that can disable/enable the appropriate functionality as a patch? I would rather not touch the default Sitecore install and instead rely on automated deployment of configuration patches.
I haven't seen this as a patch and I'm not sure it's possible to do this with just one patch (I would love to be proven wrong), but for something like this I've used a PowerShell script.
I set up Octopus Deploy to run a PowerShell script step after deploy to disable files and change settings when patch files can't do the job.
I can highly recommend the PowerCore tools for this kind of thing.
https://github.com/adoprog/Sitecore-PowerCore/tree/master/Framework/ConfigUtils
If anybody else winds up looking for this, I've posted some work up on GitHub with patch files for a variety of 8.0 versions:
https://github.com/jst-cyr/Sitecore-Role-Configs
The patches there will do the 'disable/enable/change' for authoring, delivery, or processing. I don't have one for the reporting server.
Sitecore has evaluated a POC for the same. At this point in time it is applicable to Sitecore CMS 8.1 rev. 160302 (Update-2). See here:
https://github.com/Sitecore/Sitecore-Configuration-Roles

Django and multi-stage servers

I am working with a client that demands a multi-stage server setup: a development server, a staging server, and a production/live server.
Staging should be as stable as possible, so we can test the new features we develop on the development server before taking them to the live server.
We use git and GitHub for version control. I use Ubuntu Server edition as the OS.
The problem is, I have never worked with such a multi-stage server plan. What software/projects would you recommend for handling such a setup properly, especially deployment and moving a newly developed feature to staging and then to the live server?
We use two different methods of moving code from environment to environment. The first is to use branches and triggers with our source control system (Mercurial in our case, though you can do the same thing with git). The other is to use Fabric, a Python library for executing shell commands across a number of servers.
Using source control, you can have several main branches, like production, development, and staging. Say you want to move a new feature into staging. I'll explain in terms of Mercurial, but you can port the commands over to git and it should be fine.
hg update staging
hg merge my-new-feature
hg commit -m 'my-new-feature > staging'
hg push
You then have your remote source control server push to all of your web servers using a trigger. A trigger on each web server will then do an update and reload the web server.
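For example, the trigger on each web server can just run a tiny script along these lines (the path and the reload command are placeholders for however you deploy and serve Django):
#!/bin/bash
# Called by a changegroup hook in each web server's clone after a push arrives.
cd /srv/myproject
hg update staging
sudo service apache2 reload   # or however you reload your Django processes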
To move from staging to production, it's just as easy.
hg update production
hg merge staging
hg commit -m 'staging > production'
hg push
It's not the nicest method of deployment, and it makes rolling back quite hard. But it's quick and easy to set up, and still a lot better than manually deploying each change to each server.
I won't go through fabric, as it can get quite involved. You should read their documentation so you understand what it is capable of. There are plenty of tutorials around for fabric and django. I highly recommend the fabric route as it gives you lots more control, and only involves writing some python.
There is a nice branching model for git (as it is also used by github itself for example). You can easily apply this branching model using git-flow, which is a git extension that enables you to apply some high level repository operations that fit into this model. There's also a nice blogpost about this.
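For example, the day-to-day git-flow commands map directly onto that model (the branch names are the defaults git-flow sets up):
# One-time setup: wires up the master and develop branches.
git flow init
# Work on a feature branched off develop, then merge it back when done.
git flow feature start my-new-feature
git flow feature finish my-new-feature
# Cut a release off develop and merge it into master (and back into develop).
git flow release start 1.2.0
git flow release finish 1.2.0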
I do not know what exactly you want to automate in your deployment workflow, but if you apply the model mentioned above, most of the correct version handling is done by git.
To add some further automatic processing to this, fabric is a simple but great tool, and you will find many tutorials about its usage (also in combination with git).
For handling Python dependencies, using virtualenv and pip is for sure a very good way to go.
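For example (a standard virtualenv/pip workflow):
# Create an isolated environment for the project and install its
# dependencies from a pinned requirements file.
virtualenv env
source env/bin/activate
pip install -r requirements.txt
# Capture the currently installed versions back into the file.
pip freeze > requirements.txt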
If you need something more complex, e.g. to handle more than one Django instance on one machine or to manage system-wide dependencies, check out Puppet or Chef.
Try Gondor.io or Ep.io; they both make it pretty easy (Gondor especially excels in this area) to have two or more instances with very similar code from your VCS, and to move data back and forth. (If you need an invite, ask in either IRC channel, but if I recall correctly, they're both open now.)