Upgrade to SharePoint 2010 - Project Tracking & IT Team Workspace templates not recognized

I have a WSS 3 SP 3 server with a few sites that use the Project Tracking Workspace and IT Team Workspace site templates. When I upgrade the content DBs I get errors saying:
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: Found 2 web(s) using missing web template 75817 (lcid: 1033) in ContentDatabase WSS_Content_team.site.com.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: The site definitions with Id 75817 is referenced in the database [WSS_Content_team.site.com], but is not installed on the current farm. The missing site definition may cause upgrade to fail. Please install any solution which contains the site definition and restart upgrade if necessary.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: Found 120 web(s) using missing web template 75820 (lcid: 1033) in ContentDatabase WSS_Content_team.site.com.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: The site definitions with Id 75820 is referenced in the database [WSS_Content_team.site.com], but is not installed on the current farm. The missing site definition may cause upgrade to fail. Please install any solution which contains the site definition and restart upgrade if necessary.
Things I've tried:
I downloaded the Fab 40 site templates, extracted the Project Tracking Workspace and IT Team Workspace templates, and globally deployed the solution in the farm.
In the folder C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\1033\XML I can see the two files, WEBTEMPProjSing.xml and WEBTEMPITTeam.xml, which contain the template IDs 75820 and 75817.
I've downloaded the templates from TechSolutions:
Project Tracking Workspace
IT Team Workspace
Installed them and globally deployed them; still the same errors.
If I query the farm for web templates, they do not show up. The only time I can get them to appear in Get-SPWebTemplate is when I deploy the TechSolutions solutions to a specific web, and even then the template ID is 1, not 75820 or 75817.
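The check looks something like this (a sketch; run from the SharePoint 2010 Management Shell):
# Look for the two Fab 40 template IDs reported as missing by the upgrade
Get-SPWebTemplate | Where-Object { $_.ID -eq 75817 -or $_.ID -eq 75820 }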
If there were not 120 sites using the Project Tracking Workspace, I would just scrap each subsite and recreate it, but that's quite a bit of work for 120 sites.
To make this even worse, I will then be upgrading these to 2013.
Any suggestions?

With some additional Google searching, I believe I found the issue.
With the help of this blog post I was able to get the two templates I needed installed. Here is the procedure I followed:
Start a SharePoint PowerShell session as administrator.
Install the ApplicationTemplateCore Solution file
Add-SPSolution -LiteralPath C:\Fab40\ApplicationTemplateCore.wsp
Wait 5 Minutes, then deploy the solution
stsadm -o deploysolution -name ApplicationTemplateCore.wsp -allowgacdeployment -immediate
Wait 5 Minutes, then Copy App Bin Content
stsadm -o copyappbincontent
Reset the IIS Server
iisreset
Install the other Solutions that you need with the same commands.
Add-SPSolution -LiteralPath C:\Fab40\ProjectTrackingWorkspace.wsp
Wait 5 Minutes, then deploy the solution
stsadm -o deploysolution -name ProjectTrackingWorkspace.wsp -allowgacdeployment -immediate
Wait 5 Minutes, then Copy App Bin Content
stsadm -o copyappbincontent
Once you are done adding the solutions, reboot the server. I had used the same commands the first several times, but the Fab 40 solutions would never show up or install. It must have something to do with the wait time, the IIS reset, and the reboot; that was the only combination that worked for me.
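As a final sanity check before retrying the upgrade, the content database can be re-tested so it reports any site definitions that are still missing. A minimal sketch (the web application URL is a placeholder):
# Run from the SharePoint 2010 Management Shell
Test-SPContentDatabase -Name "WSS_Content_team.site.com" -WebApplication "http://team.example.com"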

Google Cloud Compute Engine: sudo broke after "dnf upgrade" on Centos 8

The company I'm working with is developing a web application based on the Laravel framework, using Google Cloud Platform infrastructure. The frontend VM runs CentOS 8 with the Apache web server installed.
It seems that a developer ran a pretty massive "dnf upgrade" which included the kernel, openssl, kerberos, and other packages.
After the upgrade, it seems that ldconfig has lost its mind:
[developer@webserver ~]$ sudo su - root
sudo: error in /etc/sudo.conf, line 19 while loading plugin "sudoers_policy"
sudo: unable to load /usr/libexec/sudo/sudoers.so: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
sudo: fatal error, unable to load plugins
The same happens for other commands like "dnf" or "rpm":
[developer@webserver ~]$ rpm
rpm: symbol lookup error: /lib64/librpmio.so.8: undefined symbol: EVP_md2, version OPENSSL_1_1_0
After a little investigation, I found that the same commands work when specifying the LD_LIBRARY_PATH variable:
[developer@webserver ~]$ LD_LIBRARY_PATH=/lib64 rpm
RPM version 4.14.3
Copyright (C) 1998-2002 - Red Hat, Inc.
This program may be freely redistributed under the terms of the GNU GPL
...
...and of course, I can't do the same trick with the "sudo" command.
An important fact is that the VM is still running and has never been rebooted (I'll explain later why I'm saying this).
(And finally, to the point.)
The major problem is that we can't use the root account because "sudo" is not working and, by default, Google uses public key authentication (local users have random passwords generated by GCP). So I can't even run a "dnf reinstall" to try to fix the issue.
I was afraid that, once rebooted, every service would stop working because of the incorrect library paths, so instead of rebooting I created an image of the VM and then a new VM based on that image.
As I suspected: once the new VM booted, every service stopped working. I was able to read the logs over the serial console in the GCP web interface.
A snippet:
...
Oct 27 20:20:30 webserver google_oslogin_nss_cache[783]: /usr/bin/google_oslogin_nss_cache: /lib64/libjson-c.so.4: no version information available (required by /usr/bin/google_oslogin_nss_cache)
Oct 27 20:20:30 webserver NetworkManager[778]: /usr/sbin/NetworkManager: symbol lookup error: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
Oct 27 20:20:30 webserver google_oslogin_nss_cache[783]: /usr/bin/google_oslogin_nss_cache: symbol lookup error: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
Oct 27 20:20:30 webserver sssd[771]: ldb: unable to dlopen /usr/lib64/ldb/modules/ldb/ldap.so : /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
...
Using Google's official documentation, I found the "startup-script" section of the VM properties, which runs at every boot and can be used to change users' passwords.
I know that, by default, all VMs have root access disabled, so I made this script and added it to the VM's "automation" settings:
#! /bin/bash
echo 'developer:PASSWORD' | chpasswd
echo 'root:PASSWORD' | chpasswd
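For reference, the same script can also be attached as startup-script metadata with gcloud instead of the web interface (a sketch; the instance name and script path are placeholders):
gcloud compute instances add-metadata webserver \
    --metadata-from-file startup-script=/path/to/reset-passwords.sh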
Once rebooted, I tried to log in using the "serial console" option on the web interface, but with no luck. I also used journalctl (as a normal user) to find something in the logs... but nothing.
I suppose that is a consequence of that "google_oslogin_nss_cache" error; there seems to be no way to get that script to run.
Searching the internet, I found some posts where someone was able to log in directly as "root" using the "gcloud compute ssh" command. So I tried to log in as described from another VM in the same project, using both my Google account user and the root user... but no luck that way either.
(I forgot to mention that my Google account has the "project owner" role, so I have all the necessary permissions.)
Is there another way to reset the "root" password without using "sudo", or do I have to reinstall the VM from scratch?
I'm sorry for the long explanation... I hope everything is clear.
Thanks
So... this question actually breaks down into two different issues:
The only way for me to recover the "root" account was to stop the VM, detach the boot disk, attach it to a new VM, mount the filesystem, and modify the user. Once the boot disk is reattached to the original VM, you can use the modified account.
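A sketch of that procedure with gcloud and a rescue VM (instance, disk, and device names are placeholders; writing a password hash into /etc/shadow is one way to do the "modify the user" step without running the broken system's binaries):
# Stop the broken VM and move its boot disk to a rescue VM
gcloud compute instances stop webserver
gcloud compute instances detach-disk webserver --disk=webserver-boot-disk
gcloud compute instances attach-disk rescue-vm --disk=webserver-boot-disk

# On rescue-vm: mount the broken root filesystem and set a known root password hash
sudo mount /dev/sdb1 /mnt                          # root partition number may differ; check with lsblk
NEWHASH=$(openssl passwd -6 'TemporaryPassword')   # generate a SHA-512 crypt hash locally
sudo sed -i "s|^root:[^:]*:|root:${NEWHASH}:|" /mnt/etc/shadow
sudo umount /mnt

# Put the disk back as the boot disk of the original VM and start it
gcloud compute instances detach-disk rescue-vm --disk=webserver-boot-disk
gcloud compute instances attach-disk webserver --disk=webserver-boot-disk --boot
gcloud compute instances start webserver
With the hash in place, you can then log in as root on the serial console and fix the library configuration.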
The second issue was caused by the openssl upgrade... in the end, the only way to get rid of those error messages was to create a new file, /etc/ld.so.conf.d/libc.conf, containing:
/usr/lib64
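In other words, roughly (assuming root access has been recovered so ldconfig can be run):
echo '/usr/lib64' | sudo tee /etc/ld.so.conf.d/libc.conf
sudo ldconfig    # rebuild the runtime linker cache so the upgraded libraries in /usr/lib64 are found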

Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server, you can access the CFIDE/administrator to set config values, which updates the cfusion/lib/ XML files (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML files and replace the attribute values. I'm having trouble finding information about how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
"but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes."
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11, and 11 to 2016. Environments had to be mixed, as it took time to verify the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts copied updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command line utility. The tool isn't only capable of setting administrator configuration: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSL https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe#11.0.19
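For completeness, the matching export on the existing box that the settings come from looks roughly like this (a sketch; it mirrors the paths and format used above):
# Export config settings from an existing, correctly configured box
box cfconfig export to=/<path-to-config>/config.json from=/opt/ColdFusion/cfusion/ fromFormat=adobe#11.0.19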

Non-starting rails 5.2 app - ActiveSupport::MessageEncryptor::InvalidMessage

I have deployed two rails apps to Digital Ocean, Ubuntu 18.04 with Passenger and Nginx.
Both apps were built on rails 5.2.2 with ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that file is present and contains the correct key. I'm not using environment variables to store the master keys; they are in the master.key file inside each app's directory structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, the cause was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app's folder was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
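A sketch of the check and fix (the path and the owning user/group are placeholders; match whatever the working app shows):
ls -l /var/www/html                                   # compare the owners of the working and broken app folders
sudo chown -R user:root /var/www/html/AppName_Prod    # make the broken app's ownership match the working one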
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not rewrite the Nginx conf if it already exists on the server (the file is not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails through Passenger, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and set RAILS_MASTER_KEY to the new value in the Nginx passenger_env_var directive.
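For reference, the directive sits in the app's server block of the Nginx config, roughly like this (a sketch; the server name, paths, and key value are placeholders):
server {
    server_name appname.example.com;
    root /var/www/html/AppName_Prod/current/public;
    passenger_enabled on;
    # Passenger does not forward login-shell environment variables, so set the key here
    passenger_env_var RAILS_MASTER_KEY "contents-of-config/master.key";
}
Then reload Nginx so Passenger picks up the change.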

AWSDeploy to re-deploy ASP.NET WebAPI ELB application isn't working

I am using the Visual Studio AWS add-on/plugin to deploy my application, but want to move to a CI/CD server and scripted deployment.
I've installed the AWS SDK for Windows and want to use the awsdeploy.exe command line tool to accomplish this.
I've used msbuild and a publish profile to create the .zip deployable of my application (an ASP.NET WebAPI project).
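For context, the packaging step is roughly this kind of msbuild call (a sketch; the project file and publish profile names are placeholders, and the profile is assumed to be a Web Deploy Package profile that produces the .zip):
msbuild MyWebApi.csproj /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=PackageProfile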
I've put together the following command line command:
awsdeploy.exe -r -w -v -l "C:\<path_to>\deploylog.txt" "-DDeploymentPackage=C:\<path_to>\my_app.zip" "-DAWSAccessKey=<my_access_key>" "-DAWSSecretKey=<my_secret_key>" "C:\<path_do>\AWSDeployConfiguration.txt"
The "AWSDeployConfiguration.txt" file is what was generated by VisualStudio when I did the first deployment.
RESULT:
The console output and the text written to the log are:
INFO - Scanning configuration.
INFO - ...inspecting application '<my_app_name>' for environment '<my_environment_name>' and version 'v20180918223701'
Nothing happens with the ELB application.
What am I missing and/or how do I get more information to figure this out?
I posted this question on the AWS forums and got the following answer that also worked for me.
Hi! I had the same thing as you when trying to run this from cmd. But if you check what the application is returning, you will see that the value is 3. Generally, anything != 0 is an error.
What I did:
1. I checked with Process Monitor whether the application was making any network requests to AWS - no, it wasn't even trying. https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
2. I decided to recompile awsdeploy.exe and found that the main procedure has a try...catch without any logging that just returns 3. I added some logging and got a detailed error.
3. After a few attempts I got a list of missing DLL files:
AWSSDK.MobileAnalytics.dll
AWSSDK.CognitoIdentity.dll
I found all of these files in C:\Program Files (x86)\AWS SDK for .NET\bin and simply copied them to C:\Program Files (x86)\AWS Tools\Deployment Tool (next to awsdeploy.exe).
Now the deploy is working again.
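The copy step, sketched as PowerShell (assuming the default install locations mentioned above; run from an elevated prompt):
Copy-Item "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.MobileAnalytics.dll" -Destination "C:\Program Files (x86)\AWS Tools\Deployment Tool\"
Copy-Item "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.CognitoIdentity.dll" -Destination "C:\Program Files (x86)\AWS Tools\Deployment Tool\"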

CentOS7 ccollab with perforce CL update issue

I can't get Code Collaborator to upload files for code review. I suspect I am missing some config. I have been scouring Perforce, SmartBear, and Stack Overflow pages for a couple of hours now with no luck.
CENTOS7
p4 (can't seem to find the version)
Collaborator Enterprise v11.2.11200
My p4 works totally fine; I have been using it for months now to create CLs and submit. But now I need to upload files for code reviews.
Commands I ran to set up ccollab:
wget https://s3.amazonaws.com/downloads.smartbear/collaborator/11.2.11200/ccollab_client_11_2_11200_unix.sh
chmod +x ccollab_client_11_2_11200_unix.sh
./ccollab_client_11_2_11200_unix.sh
(went through the install, accepting/entering values as prompted)
ccollab login https://<codecollaborator_server> <username>
The above logs in fine with no errors.
ccollab --no-browser --scm perforce --server-proxy-host https://codecollaborator_server --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 set
Then try to upload a file:
ccollab --debug addchangelist new 123456789
and get the following output:
Connecting to server at https://
Connected to Collaborator Enterprise v11.2.11200
Connected as:
Attaching changelists to review
Auto-detecting SCM System for '/my/workspace/path'
Checking client configuration for '/my/workspace/path'.
ERROR: Could not configure SCM system:
SCM system could not be auto-detected, but there was an error: Cannot run program "accurev" (in directory "/my/workspace/path"): error=2, No such file or directory
I tried to find out what the "accurev" package is or how to use it, but no joy.
Accurev is a different source control system. Sounds like Code Collab doesn't know that it's supposed to be using Perforce?
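One thing worth trying (a sketch that simply reuses the global options already shown in the question, so auto-detection is skipped): pass the Perforce options on the addchangelist command as well:
ccollab --debug --scm perforce --p4 /bin/p4 --p4user <username> --p4charset utf8 --p4client local_workspace_name addchangelist new 123456789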