CentOS 7: ccollab with Perforce CL upload issue

I can't get Code Collaborator to upload files for code review. I suspect I am missing some config. I have been scouring the Perforce, SmartBear, and Stack Overflow pages for a couple of hours now with no luck.
CentOS 7
p4 (can't seem to find the version)
Collaborator Enterprise v11.2.11200
My p4 works totally fine; I have been using it for months now to create CLs and submit. But now I need to upload files for code reviews.
Commands I ran to set up ccollab:
wget https://s3.amazonaws.com/downloads.smartbear/collaborator/11.2.11200/ccollab_client_11_2_11200_unix.sh
chmod +x ccollab_client_11_2_11200_unix.sh
./ccollab_client_11_2_11200_unix.sh
(went through the install, pressing Enter to accept the prompts)
ccollab login https://<codecollaborator_server> <username>
The above logs in fine, no errors.
ccollab --no-browser --scm perforce --server-proxy-host https://codecollaborator_server --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 set
Then I try to upload a file:
ccollab --debug addchangelist new 123456789
and get the following output:
Connecting to server at https://
Connected to Collaborator Enterprise v11.2.11200
Connected as:
Attaching changelists to review
Auto-detecting SCM System for '/my/workspace/path'
Checking client configuration for '/my/workspace/path'.
ERROR: Could not configure SCM system:
SCM system could not be auto-detected, but there was an error: Cannot run program "accurev" (in directory "/my/workspace/path"): error=2, No such file or directory
I tried to find out what the "accurev" package is or how to use it, but no joy.

AccuRev is a different source control system. Sounds like Code Collaborator doesn't know that it's supposed to be using Perforce?
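If auto-detection keeps failing, it may be worth forcing the SCM on the upload command itself rather than relying on the stored configuration. A sketch, reusing only the flags already shown in the question (the changelist number is the question's placeholder):
# force Perforce instead of letting ccollab auto-detect the SCM
ccollab --debug --scm perforce --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 addchangelist new 123456789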


Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the XML files in cfusion/lib/ (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML files and replace the attribute values. I'm having trouble finding information about how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
"but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes"
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11, and from 11 to 2016. Environments had to be mixed, as it took time to verify that the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
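A minimal sketch of the kind of copy script described, assuming a hypothetical layout where each environment's files are checked in under config/<env>/ (the paths and environment name are illustrative, not from the original setup):
# deploy the checked-in CF settings for one environment
ENV=dev    # or staging, prod, ...
cp config/${ENV}/neo-*.xml /opt/ColdFusion/cfusion/lib/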
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command line utility. This tool isn't only capable of setting administrator configurations: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
1. Install CommandBox as part of the deployment script
2. Install CFConfig as part of the deployment script
3. Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this JSON file in source control for each type/env of box.
4. Use CFConfig to import the config.json as part of the deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe#11.0.19
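For the export side (step 3), the command is symmetric. This sketch simply mirrors the arguments of the import line above, so check the exact from/fromFormat values against the CFConfig docs:
# Export config settings from an existing box
box cfconfig export from=/opt/ColdFusion/cfusion/ to=/<path-to-config>/config.json fromFormat=adobe#11.0.19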

AWSDeploy to re-deploy ASP.NET WebAPI ELB application isn't working

I am using the Visual Studio AWS add-on/plugin to deploy my application, but want to move to a CI/CD server and scripted deployment.
I've installed the AWS SDK for Windows and thus want to use the awsdeploy.exe command line to accomplish this.
I've used msbuild and a publish profile to create the .zip deployable of my application (ASP.NET WebApi project)
I've put together the following command line command:
awsdeploy.exe -r -w -v -l "C:\<path_to>\deploylog.txt" "-DDeploymentPackage=C:\<path_to>\my_app.zip" "-DAWSAccessKey=<my_access_key>" "-DAWSSecretKey=<my_secret_key>" "C:\<path_to>\AWSDeployConfiguration.txt"
The "AWSDeployConfiguration.txt" file is what was generated by VisualStudio when I did the first deployment.
RESULT:
The console output and the text written to the log is:
INFO - Scanning configuration.
INFO - ...inspecting application '<my_app_name>' for environment '<my_environment_name>' and version 'v20180918223701'
Nothing happens with the ELB application.
What am I missing and/or how do I get more information to figure this out?
I posted this question on the AWS forums and got the following answer that also worked for me.
Hi! I had the same thing as you when trying to run this from cmd. But if you check what the application is returning, you will see that the value is 3. Generally, everything != 0 is an error.
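A quick way to see that exit code yourself, assuming you run from cmd.exe (the config path is the placeholder from the question):
REM run the deploy, then print its exit code; 3 indicates the silent failure described here
awsdeploy.exe -r "C:\<path_to>\AWSDeployConfiguration.txt"
echo %ERRORLEVEL%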
What I did:
1. I checked with Process Monitor whether the application was making any network requests to AWS - no, it wasn't even trying. https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
2. I decided to recompile awsdeploy.exe, and I found that the main procedure has a try...catch without any logging that just returns 3. I added some logs and got a detailed error.
After a few attempts I got a list of missing DLL files:
AWSSDK.MobileAnalytics.dll
AWSSDK.CognitoIdentity.dll
I found all these files in C:\Program Files (x86)\AWS SDK for .NET\bin and simply copied them to C:\Program Files (x86)\AWS Tools\Deployment Tool (next to awsdeploy.exe).
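For reference, the same fix in copy-paste form; the paths are the ones from the answer and assume default install locations:
REM copy the missing SDK assemblies next to awsdeploy.exe
copy "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.MobileAnalytics.dll" "C:\Program Files (x86)\AWS Tools\Deployment Tool\"
copy "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.CognitoIdentity.dll" "C:\Program Files (x86)\AWS Tools\Deployment Tool\"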
Now deploy is working again.

Cannot chmod file on Openshift online v3 : Operation not permitted

I am migrating a Django application from OpenShift v2 to v3 (in case you don't know, Red Hat is shutting down v2 on September 30th; see https://blog.openshift.com/migrate-to-v3-v2-eol/).
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/. I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container as debug and see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
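A minimal sketch of what that custom .s2i/bin/run script could look like, assuming the app should still be started through the app.sh from the question (the path is taken from the error output above; invoking it through bash also sidesteps the missing executable bit):
#!/bin/bash
# .s2i/bin/run - custom run script (sketch)
# start app.sh via bash so its own x permission no longer matters
exec /bin/bash /opt/app-root/src/app.sh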
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your file app.sh executable and push it to your repo as such.
If git does not track this modification, as it didn't for me, you have to use git update-index --chmod=+x app.sh for it to work.

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on Google Compute Engine (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown, and the Rscript is ignored. Removing the shutdown line causes the GCE instance to start, but the Rscript is still ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # on Debian
# sudo /usr/share/google/run-startup-scripts    # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh. Building a 40-second delay into the script and running the script as a user (not sudo-type root) seems to have solved this problem for now; see the sketch after this list. Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system boot.
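Putting the two fixes together, the startup-script sketch below adds the delay and drops root for the R job; the 40-second sleep, username, and paths are carried over from the question, and sudo -u is one assumed way to run as that user:
#! /bin/bash
# give autossh (and other user-level services) time to come up
sleep 40
# run the R job as the regular user rather than as root
sudo -u myuser /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now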

Neo4j on Heroku with Python

I'm trying to get the Neo4j on Heroku and Getting Started with Python on Heroku tutorials up and running. The Neo4j one works fine, but the Python one has problems.
For anyone else trying to follow this tutorial, I've recorded the
problems and my solutions to help you out as well.
This is all done on a Win7 x64 dev machine.
Q1) "virtualenv venv --distribute" - errors with:
'virtualenv' is not recognized as an internal or external command, operable program or batch file.
A1) The workaround is to fully qualify the path:
"C:\Python27\Scripts\virtualenv venv --distribute"
Q2a) "foreman start" - errors with:
'foreman' is not recognized as an internal or external command, operable program or batch file.
A2) Looks like a path issue, so I ran:
"set PATH=%PATH%;C:\Program Files (x86)\Heroku\ruby-1.9.2\bin\"
Q2b) "foreman start" now errors:
Bad file descriptor
{Ruby paths...}
A2b) Help?
So I can't run the app locally, but maybe it will still work on the server, so moving on...
Q3) .gitignore - can't create this file on Windows.
A3) Clone another project, copy that file over, and edit it.
Q4) "git push heroku master" - errors with:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
A4) Apparently I need to create new SSH keys; see Managing Your SSH Keys. Note again to fully qualify the path, as below, and select the new key to add to Heroku.
"c:\Program Files (x86)\Git\bin\ssh-keygen.exe" -t rsa"
Q5) Try "git push heroku master" the same with the Neo4j test app "flask-py2neo" - errors during the compile. Is this example current?
A5) Remove distribute from requirements.txt.
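For what it's worth, after that change a requirements.txt would look something like the sketch below; the package names are assumed for a typical flask-py2neo app, and the only point is that the distribute line is gone:
# requirements.txt (sketch - actual packages/versions depend on your app)
Flask
py2neo
# 'distribute' removed per A5 above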
Any ideas?