What does "retain persistent mapping variable value" mean in Informatica when importing a mapping (XML file) into the repository?

I'm trying to import a mapping from a folder into my DEV environment. I already have a mapping with the same name in DEV, so I chose Replace in the conflicts window that popped up. After that it asks me to check a box labeled "retain persistent mapping variable value". Does it mean:
1: It will retain the persistent variable values from the XML file I'm importing into DEV, or
2: It will retain the persistent variable values from the same-named mapping already in the DEV repository?
Which is it?

If you check it, Informatica will retain the persistent variable values already in the target repository/folder.
For example, suppose you are migrating a mapping that has a sequence generator with an initial value of 5000 in production and 10 in dev. If you check "retain persistent mapping variable value" while migrating from Prod to Dev, the sequence generator's initial value after migration will still be 10, because the value already in Dev was retained.
This option is mostly used when migrating from a lower environment to a higher one. In prod, where the value is typically higher, it is usually a good idea to retain that higher, correct value, so we check this option.
In your scenario I would say not to retain the dev values, because they would not be consistent with prod.

Related

How to Exclude Application Definition from Import in Oracle Apex

I need to have different values for substitution strings in DEV and PROD. How do I prevent overwriting the substitution strings when updating PROD? DEV and PROD are in separate databases.
I don't see how to exclude the application definition in build options.
Is there a better way to meet this requirement?
Thanks
The way I see it, substitution strings are application items defined as constants. Only use them for strings that are always the same in any deployment instance of the app. As soon as the value needs to be changeable (for example dev has different value than prod), use application items instead.
If you insist on doing this with build options, then this is an option:
Set the values of the application items using a computation or an application process (this is for production).
Create a second set of computations or an app process with a sequence higher than the first (so it overrides the original values) and set a build option on those (exclude on export).
That way, when you export the app, only the first set of computations / app process will be included.
However, my preference is to configure this in the database: have a settings table with a record indicating the status of the environment (prod/dev/stage/uat) and store the strings in a custom messages table (one record per app status/application item). In an application process or computation, read the values into the application items. The reason I prefer this is that the app doesn't need to know whether it is dev or prod, but the database should. This option has a couple of challenges if the same database and schema are used for prod and dev.
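The settings/messages table approach could look something like this; all table, column, and item names here are illustrative, not from the original answer:

```sql
-- Hypothetical schema: one row naming the current environment,
-- plus one row per (environment, application item) pair.
create table app_settings (
  environment varchar2(10) not null  -- 'DEV', 'PROD', 'STAGE', 'UAT'
);

create table app_messages (
  environment varchar2(10)  not null,
  item_name   varchar2(100) not null,
  item_value  varchar2(4000),
  constraint app_messages_pk primary key (environment, item_name)
);

-- In an application computation or process, look up the value
-- for the current environment and assign it to the item:
select m.item_value
  from app_messages m
  join app_settings s on s.environment = m.environment
 where m.item_name = :APP_ITEM_NAME;
```

Exporting the app then carries no environment-specific values at all; each database supplies its own.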

How do you reference a dynamic terraform output in application code?

I'm creating a dynamodb table using terraform and the name attribute of the table looks something like this...
name = "${var.service}-${var.environment}-Item-table"
Depending on the environment the name could be items-service-dev-Item-table or items-service-prod-Item-table. In my application code (JS) I obviously need to know the name of the table in order to interact with it but the dynamic nature makes it trickier.
I've considered going down the route of environment variables that are referenced by both the terraform and application code, but it seems messy. What's the best practice approach for handling something like this?
Is terraform also deploying your application code? Usually you would have Terraform inject that value as an environment variable in the application it deploys.
If that's not possible, store the value in AWS Parameter Store.
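A sketch of both options in Terraform; the resource and variable names are illustrative, and the Lambda resource is abbreviated:

```hcl
# Sketch: Terraform both creates the table and injects its name into
# the application as an environment variable.
resource "aws_dynamodb_table" "items" {
  name         = "${var.service}-${var.environment}-Item-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_lambda_function" "app" {
  function_name = "${var.service}-${var.environment}-app"
  # ... runtime, handler, role, deployment package ...

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.items.name
    }
  }
}

# Alternative when the app is deployed separately: publish the name
# to SSM Parameter Store and have the app read it at startup.
resource "aws_ssm_parameter" "table_name" {
  name  = "/${var.service}/${var.environment}/table-name"
  type  = "String"
  value = aws_dynamodb_table.items.name
}
```

In the JS application code the first option reduces to reading `process.env.TABLE_NAME`, so the app never needs to reconstruct the naming convention itself.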

Better way of handling multiple environment variable in AWS Codebuild

I have an AWS CodeBuild project connected to my GitHub account. Within GitHub I have a separate branch for each environment.
I currently have 4 environments in total (and correspondingly 4 GitHub branches): dev, qa, customer1-poc, customer2-prod.
I use a multitude of environment variables in my project, and initially I set these up within the CodeBuild project under Environment > Environment variables. So each variable exists 4 times, once per environment, distinguished by the env name.
For example, an env var called apiKey is saved in CodeBuild 4 times under the names
apiKey_dev
apiKey_qa
apiKey_customer1poc
apiKey_customer2prod
You get the idea; the same goes for the other env vars that need to differ across environments.
These env vars are read from the buildspec file and passed on to the serverless.yml file.
The issue is that as I keep creating new environments (more poc and prod envs), I need to keep replicating the whole set of env vars for each one, and it's getting tedious.
Is there some way I can store these env vars outside the CodeBuild project and still pass them to the Lambda function upon successful builds?
CodeBuild has native integration with Parameter Store:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store
In Parameter Store, you can keep your variables as a JSON document under a name like /config/prod, then retrieve it in CodeBuild and parse it with jq. This way, all the environment-specific variables are in one place. If you go this route, make sure to encrypt the Parameter Store value with a KMS key if it contains secrets. Also check AWS Secrets Manager.
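A minimal buildspec sketch of this pattern; the parameter name /config/prod, the variable names, and the deploy command are all illustrative:

```yaml
# Sketch: pull one JSON parameter per environment and split it with jq.
version: 0.2
env:
  parameter-store:
    CONFIG_JSON: /config/prod   # JSON document holding all env-specific values
phases:
  build:
    commands:
      # Extract individual values from the JSON into env vars
      - export API_KEY=$(echo "$CONFIG_JSON" | jq -r '.apiKey')
      # serverless.yml can then reference ${env:API_KEY} as before
      - sls deploy --stage prod
```

Adding a new environment then means adding one parameter (e.g. /config/customer3-poc) rather than duplicating every variable in the CodeBuild console.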

Determining Server Context (Workspace Server vs Stored Process Server)

I'd like to conditionally execute code depending on whether I'm in a Workspace or Stored Process server context.
I could do this by testing the existence of an automatic STP variable, eg _metaperson, but this wouldn't be very robust.
Assuming I already have a metadata connection, how best to check my server type?
A bulletproof way would be to create a macro variable that is initialised by the autoexec or config file in the required server context.
Of course, this only works if you have access and permission to modify files in the SAS configuration folder.
Hurrah: there is, in fact, an automatic macro variable that does just this, SYSPROCESSMODE (available since 9.4).
Extract from documentation:
SYSPROCESSMODE is a read-only automatic macro variable, which contains
the name of the current SAS session run mode or server type, such as
the following:
SAS DMS Session
SAS Batch Mode
SAS Line Mode
SAS/CONNECT Session
SAS Share Server
SAS IntrNet Server
SAS Workspace Server
SAS Pooled Workspace Server
SAS Stored Process Server
SAS OLAP Server
SAS Table Server
SAS Metadata Server
Being an automatic variable, it is of course read only.
The stored process server will preset the _PROGRAM macro variable with the program that is running. I do not know if this macro variable is read-only in the STP execution context.
But as you say, a program in the workspace context could set a _PROGRAM macro variable.
For workspace sessions look for _CLIENTAPP macro variable.
I am unaware of a function to call or immutable system option that can be examined. Try PROC OPTIONS in both contexts and see what pops out. An OBJECTSERVERPARMS value, if reported, is a list of name=value pairs. One of them would be server= and may differentiate.
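Using SYSPROCESSMODE from the accepted answer, the conditional execution the question asks for could be sketched like this (the macro name and branch bodies are illustrative):

```sas
/* Sketch: branch on the automatic macro variable SYSPROCESSMODE (9.4+). */
%macro run_by_context;
  %if %symexist(sysprocessmode) %then %do;
    %if "&sysprocessmode" = "SAS Stored Process Server" %then %do;
      %put NOTE: Running in a Stored Process Server session.;
      /* STP-specific code here */
    %end;
    %else %if "&sysprocessmode" = "SAS Workspace Server" %then %do;
      %put NOTE: Running in a Workspace Server session.;
      /* workspace-specific code here */
    %end;
  %end;
%mend run_by_context;
%run_by_context
```

The %symexist guard keeps the macro from erroring on pre-9.4 sessions where the variable does not exist.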

CronJob, django and environment variables

I have built an application for Django on OpenShift v3 Pro using the django-ex template. It works great. I'm using PostgreSQL with persistent storage.
I need a scheduled cron job that fires every hour to run some Django management commands. I'm using the CronJob pod for this.
My problem is this: I need to create the CronJob with the same environment variables the Django pod was created with (DATABASE_, DJANGO_, and others), but I don't see an easy way to do this.
Any help would be appreciated.
You should be able to include a list of environment variables to set as part of the containers definition in the template spec for the job. I can't properly extract the resource definition for a CronJob using oc explain in OpenShift 3.6 because of the way it is registered, but I would expect the field to be similar to:
CronJob.spec.jobTemplate.spec.template.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
FIELDS:
name <string> -required-
Name of the environment variable. Must be a C_IDENTIFIER.
value <string>
Variable references $(VAR_NAME) are expanded using the previous defined
environment variables in the container and any service environment
variables. If a variable cannot be resolved, the reference in the input
string will be unchanged. The $(VAR_NAME) syntax can be escaped with a
double $$, ie: $$(VAR_NAME). Escaped references will never be expanded,
regardless of whether the variable exists or not. Defaults to "".
valueFrom <Object>
Source for the environment variable's value. Cannot be used if value is not
empty.
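Putting that field to use, a CronJob manifest could look roughly like this; the image, command, secret names, and keys are illustrative, and on older clusters (e.g. OpenShift 3.6 / Kubernetes 1.6) the apiVersion may need to be batch/v2alpha1 instead:

```yaml
# Sketch: CronJob that repeats the Django pod's environment variables.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-hourly
spec:
  schedule: "0 * * * *"        # every hour, on the hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: django-management
            image: my-django-image:latest
            command: ["python", "manage.py", "my_command"]
            env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:    # reuse the same secret the web pod uses
                  name: postgresql
                  key: database-name
            - name: DJANGO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: django
                  key: secret-key
```

Pulling the values via valueFrom from the same Secrets/ConfigMaps the web deployment references keeps the two definitions from drifting apart.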