I want to add the NuGet Package Manager along with the VS2017 installation. What will be the command? - visual-studio-2017

I tried running:
choco install visualstudio2017professional --params "--productKey ****-****-****-****" "--add Microsoft.VisualStudio.Component.NuGet"
but the NuGet Package Manager is not installed. I don't have access to the UI and need to do it via the CLI.
The expectation is that, after the installation succeeds, a NuGet folder should be created at:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\CommonExtensions\Microsoft\NuGet

See this document. You can try a command like:
vs_professional__xxx.xxx.exe --add Microsoft.VisualStudio.Component.NuGet --passive --wait
Here's the VS2017 professional download link.
I'm not sure whether you need to specify the productKey; when I tested the command just now, I didn't specify it. If the same command doesn't work on your machine, you can try a command like this:
vs_professional__xxx.xxx.exe --productKey xxxxx-xxxxx-xxxxx-xxxxx-xxxxx --add Microsoft.VisualStudio.Component.NuGet --passive --wait
Since you don't have access to the UI, you need to use the --quiet or --passive switch:
--quiet: Does not display any user interface while performing the installation.
--passive: Displays the user interface, but does not request any interaction from the user (shows a progress bar but requires no UI operations).
Note: You may need to run the prompt (e.g., cmd.exe) as administrator.
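If you'd rather stay with Chocolatey, my understanding (not verified here) is that the visualstudio2017professional package forwards whatever you put in --package-parameters to the VS installer, so the component switch should go inside that single quoted string rather than as a separate argument, something like:
choco install visualstudio2017professional --package-parameters "--add Microsoft.VisualStudio.Component.NuGet --passive --wait"
The productKey switch could presumably be included in the same quoted string.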

Related

CentOS7 ccollab with perforce CL update issue

I can't get Code Collaborator to upload files for code review. I suspect I am missing some config. I have been scouring Perforce, SmartBear, and Stack Overflow pages for a couple of hours now with no luck.
CentOS 7
p4 (can't seem to find the version)
Collaborator Enterprise v11.2.11200
My p4 works totally fine; I have been using it for months now to create CLs and submit. But now I need to upload files for code reviews.
Commands I ran to set up ccollab:
wget https://s3.amazonaws.com/downloads.smartbear/collaborator/11.2.11200/ccollab_client_11_2_11200_unix.sh
chmod +x ccollab_client_11_2_11200_unix.sh
./ccollab_client_11_2_11200_unix.sh
(went through the install, accepting and entering values as prompted)
ccollab login https://<codecollaborator_server> <username>
The above logs in fine, no errors.
ccollab --no-browser --scm perforce --server-proxy-host https://codecollaborator_server --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 set
Then I try to upload a file:
ccollab --debug addchangelist new 123456789
and get the following output:
Connecting to server at https://
Connected to Collaborator Enterprise v11.2.11200
Connected as:
Attaching changelists to review
Auto-detecting SCM System for '/my/workspace/path'
Checking client configuration for '/my/workspace/path'.
ERROR: Could not configure SCM system:
SCM system could not be auto-detected, but there was an error: Cannot run program "accurev" (in directory "/my/workspace/path"): error=2, No such file or directory
I tried to find out what the "accurev" package is or how to use it, but no joy.
AccuRev is a different source control system. It sounds like Code Collaborator doesn't know that it's supposed to be using Perforce.
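If that's what is happening, one cheap thing to try (just a guess, reusing the global options you already passed to the set command) is to state the SCM explicitly on the upload command instead of relying on auto-detection:
ccollab --scm perforce --p4 /bin/p4 --p4user <username> --p4client local_workspace_name addchangelist new 123456789
I'm not certain these global options override auto-detection for addchangelist, but it should be quick to test.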

URL Rewrite 2.0 installation fails on Docker

I'm trying to get URL Rewrite 2.0 installed using this Dockerfile:
FROM microsoft/aspnet:4.6.2
WORKDIR /inetpub/wwwroot
COPY obj/Docker/publish .
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi /install/rewrite_amd64.msi
RUN net start MSIServer
RUN msiexec.exe /i c:\install\rewrite_amd64.msi /quiet /passive /qn /L*v "C:\package.log"
When I build the container image, I see this error message:
The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.
Looking at package.log after running the container, I see this:
MSI (c) (30:A4) [08:32:10:438]: Failed to connect to server. Error: 0x80040150
MSI (c) (30:A4) [08:32:10:438]: Note: 1: 2774 2: 0x80040150: 2774 2: 0x80040150
Executing net start msiserver on the running container returns a message that the service is already started, and Google says 0x80040150 could be a problem reading the registry.
Is it expected that installing URL Rewrite this way should work, or do I need to elevate permissions somehow?
Update: Running the same msiexec command on the running container successfully installs URL Rewrite.
I finally figured it out thanks to this article. Using PowerShell to run msiexec with the appropriate switches works. Oddly, it threw "Unable to connect to the remote server" when trying to also download the MSI using PowerShell, so I resorted to using ADD.
Here's the relevant portion of my Dockerfile:
WORKDIR /install
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi rewrite_amd64.msi
RUN Write-Host 'Installing URL Rewrite' ; \
Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64.msi', '/quiet', '/norestart' -NoNewWindow -Wait
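One caveat: those RUN lines use PowerShell syntax (Write-Host, Start-Process), so they assume PowerShell is the shell in effect for RUN. If your base image defaults to cmd, you can switch it explicitly earlier in the Dockerfile, for example:
# Make subsequent RUN instructions use PowerShell (only needed if the base image defaults to cmd)
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]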

Installing Google Cloud SDK, what is the path to the rc file?

https://cloud.google.com/sdk/docs/quickstart-mac-os-x
I downloaded the tar, and ran the install.sh
Next I got this message; however, I don't see any rc / .rc file.
To update your SDK installation to the latest version [162.0.1], run:
$ gcloud components update
Modify profile to update your $PATH and enable shell command
completion?
Do you want to continue (Y/n)? y
The Google Cloud SDK installer will now prompt you to update an rc
file to bring the Google Cloud CLIs into your environment.
Enter a path to an rc file to update, or leave blank to use
[/Users/leongaban/.zshrc]:
Leon, the Cloud SDK installer gives you the option to update your $PATH as well as install shell completion for commands in the Cloud SDK. This is done by adding a few lines to your shell startup script (commonly known as an rc file).
Since you selected y to go forward with this step, the installer asks for the location of the rc file (i.e. shell startup script).
It has detected that you use zsh, and hence it gives you the default option of updating the file /Users/leongaban/.zshrc.
If instead you are using bash, you would have to specify something like /Users/leongaban/.bashrc.
You could also select n in the previous step and update $PATH and/or shell completion manually too.
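For reference, all the installer does to that file is append a couple of lines that source scripts shipped with the SDK. They look roughly like this (the exact path depends on where you unpacked the tar; google-cloud-sdk under your home directory is an assumption):
# The next line updates PATH for the Google Cloud SDK.
if [ -f '/Users/leongaban/google-cloud-sdk/path.zsh.inc' ]; then . '/Users/leongaban/google-cloud-sdk/path.zsh.inc'; fi
# The next line enables shell command completion for gcloud.
if [ -f '/Users/leongaban/google-cloud-sdk/completion.zsh.inc' ]; then . '/Users/leongaban/google-cloud-sdk/completion.zsh.inc'; fi
So if you answered n at the prompt, you can add equivalent lines to your .zshrc yourself later.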

How to set up and use EC2 CLI on Mac?

I am stuck at using Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folder into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after changing into /Users/Invictus/EC2 with cd.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer unified AWS CLI is much, much easier to set up. All you need is Python, which comes built in on every Mac.
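If you go that route, the setup looks roughly like this (a sketch: depending on your Python install you may need pip3 or sudo, and the describe-images call is only a rough equivalent of the old ec2-describe-images -o amazon):
# Install the unified AWS CLI
pip install awscli
# One-time setup: prompts for access key, secret key, default region, and output format
aws configure
# Roughly equivalent to ec2-describe-images -o amazon
aws ec2 describe-images --owners amazon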
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as "./ec2-describe-images"? Does that work? If not, can you check the permissions on that script?

How to make Cygwin the default shell for Jenkins?

I'm trying to come up with a sensible solution for a build written using SCons, which relies on quite a lot of applications being accessible in a Unix-like way, using Unix-like paths, etc. However, when I try to use the SCons plugin or the Git plugin in Jenkins, it invokes them using something like cmd /c git.exe - and this will certainly fail, because Git was installed using Cygwin and is only known in the Cygwin shell, not in CMD. But even if I could make git and the rest available to cmd.exe, other problems arise: the Cygwin version of Git expects paths to have forward slashes and treats backslashes as escape characters. Idiotic Windows file-system related issues kick in too (I can't give Jenkins permission to delete my own files!).
So, is there a way to somehow make Jenkins only use Cygwin shell, and never cmd.exe? Or should I be prepared to run some Linux in a VM to have this handled?
You could configure Jenkins to execute Cygwin with a specific shell command, as follows:
c:\cygwin\bin\mintty --hold always --exec /cygdrive/c/path/to/bash/script.sh
Where script.sh will execute all the commands needed for the Jenkins execution.
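A minimal sketch of what that script.sh could look like for the SCons build described above (the scons call and the use of Jenkins' $WORKSPACE variable are assumptions about your setup):
#!/bin/bash
# Jenkins exports WORKSPACE as a Windows path; convert it to a Cygwin path first
cd "$(cygpath "$WORKSPACE")"
# Fail on errors and echo each command, then run the build
set -ex
scons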
Just for the record, here's what I ended up doing:
Added a user SYSTEM to Cygwin: mkpasswd -u SYSTEM
Edited /etc/passwd by adding the newly created user's home directory to the record. Now it looks something like this:
SYSTEM:*:18:544:,S-1-5-18:/home/SYSTEM:
Copied my own user's configuration settings, such as .netrc, .ssh, and so on, into the SYSTEM home. Then, from Windows Explorer, through an array of popups, I claimed ownership of all of these files for the SYSTEM user. One by one! I love Microsoft!
In Jenkins I now run a wrapper for my build that sets some other environment variables etc. by calling c:\cygwin\bin\bash --login -i /path/to/script/script
Gave it up because of other difficulties in configuration and made the Jenkins service run under my user rather than SYSTEM. Here's a blog post on how to do it: http://antagonisticpleiotropy.blogspot.co.il/2012/08/running-jenkins-in-windows-with-regular.html but, basically, you need to open Windows services, find the Jenkins service, open its properties, go to the "Log On" tab, and change the user to "This account".
One way to do this is to start your "execute shell" build steps with
#!c:\cygwin\bin\bash --login
The trick, of course, is that it resets your current directory, so you need to
cd `cygpath $WORKSPACE`
to get back to the workspace.
Adding to thon56's good answer, this is helpful: set -ex
#!c:\cygwin\bin\bash --login
cd `cygpath $WORKSPACE`
set -ex
Details:
-e to exit on error. This is important if you want your jobs to fail on error.
-x to echo commands to the screen, if desired.
You can also use #!c:\cygwin\bin\bash --login -ex, but that echoes a lot of login steps that you most likely don't care to see.