I am trying to create a folder in Informatica using the pmrep command.
Syntax below.
Can I call this command (pmrep) directly from a Command task, or do I have to put it in a shell script and call that shell script from the Command task?
Also, I do not see any option to pass user credentials to this command; will it automatically take those details from the parent workflow where I add it in a Command task?
pmrep createFolder -n folder_name [-d folder_description] [-o owner_name] [-a owner_security_domain] [-s shared_folder] [-p permissions] [-f active | frozendeploy | frozennodeploy]
The above pmrep command creates a folder in the Informatica repository, which is a one-time operation. There is no point in running it from a Command task: when you execute the workflow multiple times, the Command task will try to create the folder again each time, and those attempts will fail the workflow. If you are creating a folder in the Informatica repository, create it from a terminal, as sketched below.
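A minimal sketch of doing that from a terminal (the repository, domain, user, password, and folder names are placeholders). Note that createFolder itself has no credential options; credentials are supplied by connecting to the repository first with pmrep connect:

pmrep connect -r MyRepository -d MyDomain -n admin_user -x admin_password
pmrep createFolder -n MY_NEW_FOLDER -d "Folder created from the terminal"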
If you are creating a folder on the server, use the command below in the Command task:
mkdir folder_name
Please mention whether you mean an Informatica repository folder or a directory on your machine, so I can help you according to your requirement.
I am new to Informatica PowerCenter.
My task is to trigger/start Workflow B right after Workflow A completes, using an infacmd command.
The suggestion I was given is to add, after all the sessions in Workflow A, a Command task that runs "infacmd.sh startworkflow" to start Workflow B with all the options.
I've tried some guides, but they were written for versions that are too old. I'm using Informatica 10.1.1.
Thank you.
For the Command task you can use the following command:
pmcmd startworkflow -sv $INFA_SERVICE -d $INFA_DOMAIN -u $INFRAREPO_USERID -p $INFRAREPO_PASSWD -f $INFA_FOLDER -wait $INFA_WORKFLOW
Replace the variables according to your service, domain, folder, and workflow names, etc.
Otherwise, you can create a shell script that calls the workflow using the above command, and invoke that shell script from the last session's 'Post-Session Success Command'.
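A minimal sketch of such a wrapper script, assuming the INFA_* variables are either exported in the environment or replaced with real values:

#!/bin/sh
# start_workflow_b.sh - invoked from the last session's Post-Session Success Command
pmcmd startworkflow -sv $INFA_SERVICE -d $INFA_DOMAIN -u $INFRAREPO_USERID -p $INFRAREPO_PASSWD -f $INFA_FOLDER -wait $INFA_WORKFLOW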
Consider creating a Command task that touches a file, and having Workflow B start together with Workflow A, with a Wait task in Workflow B that waits for that file and deletes it as a last step.
This way you don't need to invoke pmcmd with a hardcoded username and password.
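A sketch of that file-based hand-off; the flag file path is a placeholder:

# Command task at the end of Workflow A: signal completion
touch /shared/flags/wf_a_done.flag

# Command task in Workflow B, after the Wait task fires: clean up the flag
rm -f /shared/flags/wf_a_done.flag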
I need to update /etc/hosts for all instances in my EMR cluster (EMR AMI 4.3).
The whole script is nothing more than:
#!/bin/bash
echo -e 'ip1 uri1' >> /etc/hosts
echo -e 'ip2 uri2' >> /etc/hosts
...
This script needs to run as sudo or it fails.
From here: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#bootstrapUses
Bootstrap actions execute as the Hadoop user by default. You can execute a bootstrap action with root privileges by using sudo.
Great news... but I can't figure out how to do this, and I can't find an example.
I've tried a bunch of things... including...
running as the hadoop user and adding 'sudo' to each of the 'echo' statements in the script
using a shell script to copy and chmod the script above (the 'echo' statements with no 'sudo'), then running the local copy via a run-if bootstrap action that calls 1=1 sudo bash /home/hadoop/myDir/myScript.sh
hard coding the whole script as a one-liner into a run-if bootstrap action
I consistently get:
On the master instance (i-xxx), bootstrap action 2 returned a non-zero return code
If I check the logs for the "Setup hadoop debugging" step, there's nothing there.
From here: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-overview.html#emr-overview-cluster-lifecycle
Summary of EMR setup (in order):
provisions EC2 instances
runs bootstrap actions
installs native applications... like Hadoop, Spark, etc.
So it seems like there's some risk that since I'm mucking around as the hadoop user before Hadoop is installed, I could be messing something up there, but I can't imagine what.
I think it must be that my script isn't running as 'sudo' and it's failing to update /etc/hosts.
My question... how can I use bootstrap actions (or something else) on EMR to run a simple shell script as sudo? ...specifically to update /etc/hosts?
I've not had problems using sudo from within a shell script run as an EMR bootstrap action, so it should work. You can test that it works with a simple script that simply does "sudo ls /root".
Your script is trying to append to /etc/hosts by redirecting stdout with:
sudo echo -e 'ip1 uri1' >> /etc/hosts
The problem here is that while the echo is run with sudo, the redirection (>>) is not. It's run by the underlying hadoop user, who does not have permission to write to /etc/hosts. The fix is:
sudo sh -c 'echo -e "ip1 uri1" >> /etc/hosts'
This runs the entire command, including the stdout redirection, in a shell with sudo.
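Putting it together, a bootstrap script along these lines should work (the ip/uri pairs are the placeholders from the question):

#!/bin/bash
# EMR bootstrap action: the redirection happens inside the sudo'd shell,
# so it has permission to append to /etc/hosts
sudo sh -c 'echo -e "ip1 uri1" >> /etc/hosts'
sudo sh -c 'echo -e "ip2 uri2" >> /etc/hosts'

An equivalent alternative is piping to tee, e.g. echo 'ip1 uri1' | sudo tee -a /etc/hosts.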
I’m setting up a patch process for EC2 servers running a web application.
I need to build an automated process that installs system updates but reverts to the last working EC2 instance if the web application fails a status check.
I’ve been trying to do this using an Automation Document in EC2 Systems Manager that performs the following steps:
Stop EC2 instance
Create AMI from instance
Launch new instance from newly created AMI
Run updates
Run status check on web application
If check fails, stop new instance and restart original instance
The Automation Document runs the first 5 steps successfully, but I can't figure out how to trigger step 6. Can I do this within the Automation Document? What output would I be able to use from step 5? If it uses aws:runCommand, should the runCommand trigger a new Automation Document or another AWS tool?
I tried the following to solve this, which more or less worked:
Included an aws:runCommand action in the automation document
This ran the DocumentName "AWS-RunShellScript" with the following parameters:
Downloaded the script from s3:
sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
Set the file to executable:
chmod +x /tmp/script.sh
Executed the script using variables set in, or generated by, the automation document:
bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}
The script included the following getopts command to set the input variables (the option letters map to the same variables used in the invocation above):
while getopts o:n:i:l:w: option
do
    case "${option}" in
        o) VAR1=${OPTARG};;
        n) VAR2=${OPTARG};;
        i) VAR3=${OPTARG};;
        l) VAR4=${OPTARG};;
        w) VAR5=${OPTARG};;
    esac
done
The bash script used the variables to run the status check, and roll back to last working instance if it failed.
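A rough sketch of such a check-and-rollback script; the health-check URL, the HTTP-200 success criterion, and the option-to-variable mapping are assumptions, not taken from the answer above:

#!/bin/bash
# -o original instance ID, -n new (patched) instance ID, -w web app health-check URL
while getopts o:n:w: option
do
    case "${option}" in
        o) OLD_INSTANCE_ID=${OPTARG};;
        n) NEW_INSTANCE_ID=${OPTARG};;
        w) HEALTH_URL=${OPTARG};;
    esac
done

# Treat anything other than an HTTP 200 from the web application as a failed check
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL")

if [ "$STATUS" != "200" ]; then
    # Roll back: stop the patched instance and restart the original one
    aws ec2 stop-instances --instance-ids "$NEW_INSTANCE_ID"
    aws ec2 start-instances --instance-ids "$OLD_INSTANCE_ID"
fi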
I have a crontab that fires a PHP script that runs the AWS CLI command "aws ec2 create-snapshot".
When I run the script via the command line, the PHP script completes successfully, with the aws command returning a JSON string to PHP. But when I set up a crontab to run the PHP script, the aws command doesn't return anything.
The crontab is running as the same user I use when I run the PHP script on the command line myself, so I am a bit stumped.
I had the same problem running a Ruby script (ruby script.rb).
I replaced ruby with its full path (/sources/ruby-2.0.0-p195/ruby) and it worked.
In your case, replace "aws" with its full path. To find it:
find / -name "aws"
The reason it's necessary to specify the full path to the aws command is that cron by default runs with a very limited environment. I ran into this problem as well, and debugged it by adding this to the cron script:
set | sort > /tmp/environment.txt
I then ran the script via cron and via command line (renaming the environment file between runs) and compared them. This led me to see that I needed to set both the PATH and the AWS_DEFAULT_REGION environment variables. After doing this the script worked just fine.
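For example, a crontab along these lines (the region, paths, and schedule below are placeholders):

# Environment variables at the top of the crontab apply to every job below
PATH=/usr/local/bin:/usr/bin:/bin
AWS_DEFAULT_REGION=us-east-1

# Run the snapshot script nightly at 02:00; log output so failures are visible
0 2 * * * /usr/bin/php /home/ec2-user/create-snapshot.php >> /tmp/create-snapshot.log 2>&1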
I want to deploy a WAR file from Jenkins to the cloud.
Could you please let me know how to deploy a war file from Jenkins on my local machine to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this is just adding a post-build task to run the standard command line deployment scripts.
From the referenced page, assuming you have the post-build task plugin on Jenkins and the AWS command line tools installed:
STEP 1
In the Jenkins job configuration screen, add a “Post-build Action” and choose the plugin “Publish artifacts to S3 Bucket”; specify the Source (in our case we use Maven, so the source is target/*.war) and set the Destination to your S3 bucket name.
STEP 2
Then, add a “Post-build task” (if you don’t have it, it is available as a plugin) to the same section above (“Post-build Actions”) and drag it below “Publish artifacts to S3 Bucket”. This is important because we want to make sure the war file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”
In the script text area, put in the path of the script to automate the deployment (described in step 3 below). For us, we put something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
$VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables will be used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>
# Get the current time and append it to the name of the .war file being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"
# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"
# Create the application version in AEB and link it with the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"
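If the legacy elastic-beanstalk-* tools and s3cmd are not available, roughly the same flow can be sketched with the current AWS CLI. The environment name is a placeholder, and the final update-environment call (which actually points an environment at the new version) is an addition beyond the original script:

#!/bin/sh
current_time=$(date +"%Y%m%d%H%M%S")
new_file="app_$current_time.war"

# Rename the uploaded WAR so each deployment has a unique key
aws s3 mv "s3://<your S3 bucket>/app.war" "s3://<your S3 bucket>/$new_file"

# Register the new application version with Elastic Beanstalk
aws elasticbeanstalk create-application-version --application-name "Hoiio App" --version-label "$1" --description "$2" --source-bundle S3Bucket="<your S3 bucket>",S3Key="$new_file"

# Deploy it by pointing the environment at the new version
aws elasticbeanstalk update-environment --environment-name "<your environment>" --version-label "$1"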