I'd like to ask how I can migrate mappings, worklets, and workflows from Informatica PowerCenter Integ to Prod. The Integ and Prod environments are on different servers, so I can't just move objects from one folder to another.
Is this possible? I can't find any reference or tutorial.
Thank you in advance.
In PowerCenter, it's possible to copy from one environment to another. Ask everyone to check in their objects first and log off from both the source and target repositories.
Open Repository Manager, connect to the source repository and select the folder you want to copy.
Click Edit > Copy.
Connect to the target repository with the same user account used to connect to the source repository. If you do not have the same user, you need to use a deployment group or deployment folder.
In the Navigator, select the target repository and click Edit > Paste. You will get several options, such as replacing objects, using the latest version, and checking out. You can follow the link below for help.
https://docs.informatica.com/data-integration/powercenter/10-5/repository-guide/copying-folders-and-deployment-groups/copying-or-replacing-a-folder/steps-to-copy-or-replace-a-folder.html
Now, my preference would be to use a deployment group or deployment folder. It's easy to use and easy to control: if you want to replace 10 objects out of hundreds, create a standard process for future migrations, or deploy automatically using a command task, you can do that as well. A rough command-line sketch follows.
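For scripted deployments, the pmrep command-line tool can run a deployment group copy. A rough sketch (repository, domain, group, and control-file names are placeholders, and exact options vary by PowerCenter version, so check the pmrep Command Reference):

pmrep connect -r DEV_REPO -d Domain_Dev -n Administrator -x MyPassword
pmrep deploydeploymentgroup -p MyDeploymentGroup -c deploy_control.xml -r PROD_REPO

The control file is an XML file that specifies options such as which folders to map and how to resolve conflicts with existing objects, which is what makes this repeatable for future migrations.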
I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without having to upload it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools then.
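For example, a minimal sketch of a committed Django settings module that reads its secrets from the environment (the variable names here are illustrative):

import os

# Secrets come from the environment; fail fast if one is missing.
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]

# Non-secret configuration can live in the committed file as usual.
DEBUG = os.environ.get("DJANGO_DEBUG", "false").lower() == "true"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(",")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],  # secret, never committed
    }
}

On the server you would export those variables (for example in the systemd unit or the deploy user's shell profile) before starting the app.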
We have a microservice project architecture with a single project repository containing several folders; each folder holds the files for a specific API. We would like to keep that as a single repo but configure separate jobs in Jenkins for each API folder. So we would like to know how to use the same repo for the SCM checkout in Jenkins but trigger builds only for commits made to the folders where changes occurred. I know it supports regexes to include and exclude paths, but I would like to know how best to use that.
Say, for example, I have a project sample-project with 3 folders: abc, def, and xyz.
We now have a job in Jenkins that checks out sample-project. We would like that job to be configured so that it triggers only when something inside the abc folder is changed or committed, and not otherwise. How is this best implemented?
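One way to express this kind of path filter is the changeset condition in a declarative Jenkinsfile; a minimal sketch, assuming the folder layout from the example above and a hypothetical build.sh script:

pipeline {
    agent any
    triggers { pollSCM('H/5 * * * *') }  // or a webhook; changeset needs a changelog
    stages {
        stage('Build abc') {
            when { changeset "abc/**" }  // run only if files under abc/ changed
            steps {
                dir('abc') {
                    sh './build.sh'      // hypothetical per-API build script
                }
            }
        }
    }
}

Alternatively, for freestyle jobs the Git plugin's "Included Regions" field accepts patterns like abc/.* to restrict which changed paths trigger a poll-based build.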
I am looking for a way to share the EB configuration so anyone on my team with valid AWS credentials can deploy the code. By default, EB adds the following to your .gitignore file:
# Elastic Beanstalk Files
.elasticbeanstalk/*
!.elasticbeanstalk/*.cfg.yml
!.elasticbeanstalk/*.global.yml
Do I need to check these files in to share them with the team?
In my opinion, AWS royally messed up with their .gitignore defaults. This was confusing at first, because it seemed like the entries were there for a good reason; we couldn't find one. Maybe they were just a precaution so you didn't commit something you shouldn't. However, firstly, modifying a project's .gitignore is not something a tool should be doing by default, and secondly, no one should be committing code they haven't reviewed.
As Kush notes in his reply, you can add the files into a nested directory which would be tracked by your VCS. I'm assuming the reason for this is so that different developers can maintain different configurations. We have zero use for anything remotely resembling this, but it's worth noting, as I'm sure someone might.
We've completely removed these entries from our project and commit the entire .elasticbeanstalk and .ebextensions directories.
Assuming you have CLI access, you can create a template and share it with a command like:
eb config save dev-env --cfg prod
Now, open this file in a text editor to modify/remove sections as necessary for your production environment.
Note: AWSConfigurationTemplateVersion is a required field. Do not remove it from the configuration file.
Checking Configurations into Version Control
If you want to check in your saved configurations so that anyone with access to your code can use the same settings in their own environments, or if you want to track different versions of the saved configurations, move the file up one level into the .elasticbeanstalk/ folder. Saved configurations are located in the .elasticbeanstalk/saved_configs/ folder. Once moved, the file can be checked in and will still work with the EB CLI. After you move the file, you must add and commit it.
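For example, assuming the saved configuration created by the eb config save command above ended up as prod.cfg.yml:

# Move the saved configuration up one level so it is tracked by Git
mv .elasticbeanstalk/saved_configs/prod.cfg.yml .elasticbeanstalk/prod.cfg.yml
git add .elasticbeanstalk/prod.cfg.yml
git commit -m "Share EB saved configuration with the team"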
Refer to this AWS blog post.
I have a single Cloud Source Repository with multiple projects. I am able to create a cloudbuild.yaml file in the repo root that builds all projects. However, I don't want to have a build trigger that rebuilds all of the projects since most commits will be for a single project. Ideally I would like to have a cloudbuild.yaml file in each project subdirectory and a build trigger that detects changes in the project subdirectory of the repository. Is something like this possible?
As a possible workaround, I believe I may be able to keep my cloudbuild.yaml in the repository root and create a custom step that will get the commit sha (via the COMMIT_SHA substitution) and then get the list of files committed (via "git show --name-only --pretty=format: $COMMIT_SHA") to determine which project should be built and what image should be created. An alternative may be to have a tagging naming convention that will contain the project name and basing the trigger on that but I don't want to tag each commit.
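A rough sketch of that custom step as a shell script, assuming the build environment has access to the Git history (the abc/ path and the build invocation are placeholders):

#!/bin/sh
# List the files touched by the commit being built.
CHANGED=$(git show --name-only --pretty=format: "$COMMIT_SHA")

# Build the sub-project only if one of its files changed.
if echo "$CHANGED" | grep -q '^abc/'; then
    echo "Changes detected under abc/, building"
    # ...invoke the build/push for abc here...
else
    echo "No changes under abc/, skipping"
fi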
Note: it seems like build triggers work very well when you have multiple repos, but we made the decision to go with a monorepo, and I don't want to rehash that debate in this question. I'd like to understand how best to use Build Triggers in a monorepo.
I am building out a Sitecore farm with multiple Content Delivery (CD) servers. In the current process, I stand up the CD server and go through the manual steps of commenting out connection strings and enabling or disabling config files, as detailed here, for each virtual machine/CD server:
https://doc.sitecore.net/Sitecore%20Experience%20Platform/xDB%20configuration/Configure%20a%20content%20delivery%20server
But since I have multiple servers, is there any sort of global configuration file where I could dictate the settings I want (essentially a settings template for CD servers), or a tool where I could load my desired settings/template for which config files are enabled/disabled etc.? I have used the SIM tool for instance installation, but unsure if it offers the loading of a pre-determined "template" for a CD server.
It just seems inefficient to have to stand up a server and then configure each one manually, versus a more automated process (e.g., akin to Sitecore Azure, but in this case I need to install the VMs on-prem).
There's nothing directly in Sitecore to achieve what you want. Depending on what tools you are using, though, there are some options to reach that goal.
Visual Studio / Build Server
You can make use of SlowCheetah config transforms to configure non-web.config files such as ConnectionStrings and AppSettings. You will need a different build profile for each environment you wish to create a build for, plus the appropriate config transforms and overrides. SlowCheetah is available as a NuGet package to add to your projects, and also as a Visual Studio plugin which provides additional tooling to help add the transforms. A sketch of such a transform follows.
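SlowCheetah applies standard XDT transform syntax, so a transform for a standalone ConnectionStrings.config might look like this (a sketch; the connection name and string are placeholders):

<connectionStrings xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- Replace the matching connection string for this build profile -->
  <add name="core"
       connectionString="Data Source=prodsql;Initial Catalog=Sitecore_Core;Integrated Security=True"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings>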
Continuous Deployment
If you are using a continuous deployment tool like Octopus Deploy then you can substitute variables in files on a per environment and machine role basis (e.g. CM vs CD). You also have the ability to write custom PowerShell steps to modify/transform/delete files as required. Since this can also run on a machine role basis you can write a step to remove unnecessary connection strings (master, reporting, tracking.history) on CD environments as well as delete the other files specified in the Sitecore Configuration Guide.
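A custom PowerShell step for the CD role might look something like this (a sketch; the site path is a placeholder, and the connection names are the ones listed above):

# Remove connection strings that CD servers should not have
$path = "C:\inetpub\wwwroot\App_Config\ConnectionStrings.config"
$xml = [xml](Get-Content $path)
foreach ($name in @("master", "reporting", "tracking.history")) {
    $node = $xml.connectionStrings.add | Where-Object { $_.name -eq $name }
    if ($node) { $xml.connectionStrings.RemoveChild($node) | Out-Null }
}
$xml.Save($path)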
Sitecore Config Overrides
Anything within the <sitecore> node in web.config can be modified and patched using the Include File Patching Facilities built into Sitecore. If you have certain settings which need to be modified or deleted for a CD environment, you can create a CD-specific override, which I place in /website/App_Config/Include/z.ProjectName/WebCD, and use a post-deployment PowerShell script in Octopus Deploy to delete this folder on CM environments. There are examples of patches within the Include folder, such as SwitchToMaster.config. In theory you could write a patch file to remove all the config sections mentioned in the deployment guide, but it would be easier to write a PowerShell step to delete these instead.
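A minimal patch file sketch (the setting name is a placeholder; the patch: namespace provides the delete directive):

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Remove a CM-only setting on CD servers -->
      <setting name="Example.Setting">
        <patch:delete />
      </setting>
    </settings>
  </sitecore>
</configuration>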
I tend to use all the above to aid in deploying to various environments for different server roles (CM vs CD).
Strongly recommend you take a look at Desired State Configuration (DSC), which will do exactly what you're talking about. You need to set up the actual configuration at least once, of course, but then it can be deployed to as many machines as you'd like. Changes to the config are automatically propagated to all machines built from it, and any changes made directly to the machines (referred to as configuration drift) are automatically corrected. This can be combined with Azure, which now has the capability to act as a pull server through its Automation features.
There's a lot of reading to do to get up to speed with this feature set, but it will solve your problem.
This is not a Sitecore tool per se.
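A minimal DSC sketch (the node name and file path are hypothetical; the File resource simply ensures a CM-only include file is absent on CD servers):

Configuration SitecoreCD {
    Node "CD01" {
        # Ensure a CM-only include file does not exist on this CD server
        File RemoveCmOnlyInclude {
            DestinationPath = "C:\inetpub\wwwroot\App_Config\Include\CMOnly.config"
            Ensure          = "Absent"
        }
    }
}

# Compile the configuration to a MOF and apply it
SitecoreCD
Start-DscConfiguration -Path .\SitecoreCD -Wait -Verbose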