Source in mapping is not migrating properly through fcat in Informatica

I am facing an issue while migrating a workflow to another repository using fcat. What I did is: I changed a field length from 40 to 100, from the source all the way through to the target. After a test run in the Dev repository, everything was fine. We then migrated the same workflow to the UAT repository through fcat, but after migrating, the field length in UAT is still 40. Can anyone give a suggestion on how to resolve this?
Thanks.

Related

Deploy only changed files on AWS Elastic Beanstalk website

I have successfully deployed my website on AWS Elastic Beanstalk. Now I want to change the code in one of my files.
If I do eb deploy, it will completely deploy a new version of my code, which I don't want. I already have an updated DB on Elastic Beanstalk. If I deploy the whole code again, it will overwrite my DB file.
How can I successfully deploy only the changed file?
This may not be the answer you're looking for, but I would highly recommend deleting this file from your code repository. Hopefully you're using a version control system like Git; if you want to keep the original file for historical purposes, I would create an entirely different repository and put it in there.
Why? Even if you did come up with a solution to only deploy changed files...would you really want to trust it? If there's any problem with the solution you came up with, you would entirely erase/overwrite your production database. Not good.
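If you do go the removal route, the mechanics are simple. A minimal sketch, assuming Git and a hypothetical db.sqlite filename:

# Hypothetical filename; substitute your actual database file.
git rm --cached db.sqlite          # stop tracking it, but keep the local copy
echo "db.sqlite" >> .gitignore     # make sure it can't sneak back in
git commit -m "Stop tracking the database file"

After this, subsequent deploys won't ship (or overwrite) the database file, because it's no longer part of the repository.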
In addition, if you want to build a really robust system to create your app entirely from scratch in AWS, take a look at CloudFormation. It takes some learning and work, but you can build a script -- and maintain it in version control -- that will scaffold your entire cloud infrastructure.
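To give a flavour of what that looks like, here is a minimal, illustrative CloudFormation template; the application name and description are placeholders:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyWebsite:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      ApplicationName: my-website           # placeholder name
      Description: Scaffolded from a version-controlled template

A real template would go on to declare the environment, database, and so on as further resources.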

Create a new GCP project from existing

I created a project on GCP. It has a Postgres database, a Node.js App Engine web app, and some other stuff. Now I am developing the app, and when everything is set up and running nicely I'd like to clone this project somehow and create a staging and a production environment/project.
So my project is now called dev-awesomeapp. Can I somehow make a staging-awesomeapp for staging and an awesomeapp for production from my existing dev-awesomeapp?
Edit: there is another question from 2017 that asks the same thing, but maybe it's possible now, 2.5 years later?
You can't, but if you don't want to configure everything from the beginning each time, you can use "architecture as code" with tools like Deployment Manager or Terraform.
This could help you replicate your infrastructure; moreover, it can be really helpful for automating any architectural changes if you use it in a CI/CD pipeline, making your release phase quicker and more reliable :)
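For example, here's a rough Terraform sketch of the idea; the project names and org ID are placeholders, and you'd apply a variant of this per environment:

provider "google" {}

resource "google_project" "awesomeapp" {
  name       = "staging-awesomeapp"   # repeat with "awesomeapp" for production
  project_id = "staging-awesomeapp"
  org_id     = "123456789012"         # placeholder organization ID
}

The win is that staging and production are then guaranteed to be built from the same description, rather than cloned by hand.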

Does the main Nexus repository have to be named: "central"?

G'day all... I have been looking around Stack Overflow about the use of a local or cluster Nexus repository. Anyway, I've had a Nexus repository on my PC and my laptop for a number of years with little bother.
I think more projects are using Nexus themselves, and I've had three or four issues just in the last few weeks (since December) where there can be clashes with any artefact from an id of "central". I'm seeing problems like this one more often:
Nexus won't download artifacts from Central
On my laptop, I seem to have resolved problems to date, by naming the main repository (something like) "nexus-local". But not on my desktop workstation. Strange.
How strange? Well, an empty 'archetype' project for Vert.x with the same attributes compiles fine on the laptop and fails on the workstation, complaining about not finding something from a repository named "central".
Upon inspection, I noticed that even though I've renamed the workstation repository as "nexus-local" there's some internal ID that remains "central".
There are Nexus repository settings files, e.g. nexus.xml, showing the internal(??) central name/id. So, the questions:
Does the local mirror have to be called "central"?
If not, how does one rename it 'responsibly'?
Alternatively, is there a simple demo or cookbook for a Maven/Nexus setup somewhere that doesn't require me to read three books first and compile the knowledge myself, for a simple solo setup?
Would dropping Nexus altogether and restarting the server 'fix' this kind of issue?
Maven 3 has a built-in repository called "central", and you need to override it in your settings.xml as described in the Nexus documentation. You then combine this with a repository group of any name and use that as a mirrorOf * (including central).
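The relevant part of settings.xml looks roughly like this; the group URL is a placeholder for your own Nexus instance, and the dummy http://central URL is never actually contacted because the mirror intercepts all requests:

<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://your-nexus:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url> <!-- dummy URL; the mirror above handles all requests -->
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>

Note that the repository id stays "central" deliberately -- that's how you override Maven's built-in definition rather than fight it.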
The repository group can contain any repository you like as well. The default is that it contains a proxy repository of the Central Repository and the three hosted repositories: releases, snapshots, and thirdparty.
If you need more, you just add them to the group.
And if you are looking for a simple step-by-step example, check out the Nexus evaluation guide chapter about proxying and publishing, and the example project used there.

ColdFusion continuous integration

Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following and what the best approach to achieve it would be.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automated build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether functionality works as expected). For the latter part, some unit testing tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular, it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself, but I did have a look at it some time ago (more than 12 months back) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
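A rough sketch of such an Ant target follows. This is an assumption-laden illustration: the taskdef class, jar location, and attribute names are from the MXUnit documentation as I remember it, and should be verified against the wiki page linked above:

<!-- Illustrative only: verify names against the MXUnit docs -->
<taskdef name="mxunittask"
         classname="org.mxunit.ant.MXUnitAntTask"
         classpath="lib/mxunit-ant.jar"/>

<target name="test">
  <mxunittask server="localhost" port="8500"
              defaultrunner="/mxunit/runner/HttpAntRunner.cfc"
              outputdir="build/test-results" verbose="true">
    <directory path="/var/www/app/tests" recurse="true"
               packageName="tests" componentPath="app.tests"/>
  </mxunittask>
</target>

Jenkins then picks up the JUnit-style XML from the output directory for its test report.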
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took, and which build it was associated with (a sketch of that logging SQL follows this list). If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
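The appended logging SQL is nothing fancy; per script it looks something like the following, where the table and column names are made up for illustration and GETDATE() assumes SQL Server:

-- Appended automatically after each imported script by the concatenation step
INSERT INTO runlog (script_name, build_label, ran_at)
VALUES ('012_add_customer_index.sql', 'build-347', GETDATE());

Durations fall out of the difference between consecutive ran_at timestamps.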
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.

Deployment of files other than source code

I am starting to prepare a roadmap for our release process. We are at present using TortoiseSVN and Ant for building source. I am considering implementing continuous integration and would like to know the right direction for the choices below.
Firstly, the present process is such that a developer works on a file and commits that file directly to the repo. Others run the TortoiseSVN update command to pull in the required changes. The same process is followed on the build server, which updates the source code, builds, and then deploys to the QA and production servers. However, this process lacks control of the repo: during an update, unwanted code is also pulled in when two developers have worked on the same file fixing two different issues, one approved by QA and the other rejected. How can I overcome this scenario?
Secondly, apart from source we have a bunch of other files, such as XML files, CSS, JS, etc. How do I automate deployment of these files? I have configured CruiseControl on my local machine, and it works fine when it comes to executing a build, but I'm not sure how to handle the other files, since updating those files in production seems risky and error-prone. Any suggestion on this would be really helpful.
You could try integrating PowerShell with CruiseControl; our team has CC fire off the build process and then uses PowerShell to copy the resulting project files (code and others) to production, a test site, or wherever.
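A cut-down sketch of that PowerShell step, where the paths and server names are made up:

# Source is the CruiseControl build output; destination is the target web root.
$source = "C:\builds\myapp\output"
$dest   = "\\webserver\wwwroot\myapp"

# Mirror the build output (code, XML, CSS, JS, ...) to the target, overwriting older files.
Copy-Item -Path "$source\*" -Destination $dest -Recurse -Force

Because the copy starts from the build output rather than a working copy, only reviewed, built artefacts ever reach production.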
To deal with the lack of repository control, I'd suggest you create a candidate branch off your trunk and designate that as your integration code. Once it's settled and the necessary changes have been committed or pulled, promote it to regression for further testing. Then, once that testing is successful, promote it to production.
In this process your developers wouldn't be committing to production directly; instead, through an iterative process, a new production repository will result, whose changes can then be reintegrated into trunk so the process can start anew for the next release.
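In SVN terms, that flow boils down to something like the following; the branch names and layout are placeholders for whatever convention you adopt:

# Cut the candidate (integration) branch from trunk:
svn copy ^/trunk ^/branches/candidate -m "Create integration branch"

# Promote by copying once each stage signs off (names are placeholders):
svn copy ^/branches/candidate ^/branches/production -m "Promote to production"

# Then, from a trunk working copy, fold the approved changes back in:
svn merge --reintegrate ^/branches/candidate

Because svn copy is cheap, promoting is essentially just recording which revision passed each gate.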