Syntax for PRESOURCE call using Protocol bridge agent as source in MQFTE - websphere-mq-fte

What is the syntax for calling a shell script via a presource call when an MQFTE protocol bridge agent is the source? (Linux is used both for the local FTE installation and for the SFTP server to which the protocol bridge agent connects.) Below is the error I am getting: BFGCR0003E: A request has been made for the agent to call the command ''10.350.81.70:/testing/Sample_presrc.bat''. The agent's command path '10.350.81.70:/testing/' does not define a path to this command. Only commands whose path is on the agent's command path can be run.
Note: the script is a batch file because the SFTP server (where the script will be placed) is based on a Wintel platform, but the file system is Linux. Likewise, the local FTE machine on which the protocol bridge agent is installed is also a Linux platform.
Thanks,
Vasuki P

Given the error I think your syntax is just fine. The 'command path' the error refers to maps to the 'commandPath' property of the source agent's agent.properties file documented here:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.wmqfte.doc/properties.htm
The commandPath is a control which lets you restrict which commands can be run via presource calls and the like - it's a measure to help prevent the wrong applications being called, or worse, someone who has compromised your MFT system being able to run any command they please.
':' is the path separator on Unix, so your commandPath is set up to run commands from a directory named '10.350.81.70' and another directory named '/testing/'. For a presource call, the command needs to be on a filesystem mounted on the machine where the source agent is running. If 10.350.81.70 is the source agent's host, then alter the commandPath for that agent to '/testing' and alter the syntax of your createTransfer command to invoke '/testing/Sample_presrc.bat'.
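For example, a minimal sketch (the agent and file names are hypothetical; check the fteCreateTransfer documentation for the exact -presrc specification in your version):
In the source agent's agent.properties:
commandPath=/testing
Then reference the script by its full path in the transfer request:
fteCreateTransfer -sa BRIDGE_AGENT -da DEST_AGENT -presrc "/testing/Sample_presrc.bat" -df /out/file.txt /in/file.txt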

Related

How to convert the client based file path to the server based when we create symlink within NFS Drive?

I am developing NFS server modules (ProcedureSYMLINK) and ran into a path-conversion issue when creating a symlink.
After starting the NFS server service, I connected to the server and mounted it as a drive on a Linux client.
I then ran the command below to create a symbolic link inside the NFS mount using full paths, and debugged it on the server side. But the target file path is not given in server-based form.
ln -s /mnt/nfs/1.bin /mnt/nfs/symlink/1.lnk
Let me give you an example to clarify my question.
The base directory path on NFS Server is /usr/nfs.
So I executed the command below on the server.
./nfs_server /usr/nfs
And I mounted the NFS share on the Ubuntu client using the command below.
sudo mount -t nfs -o vers=3,proto=tcp,port=2049 192.168.1.37:/usr/nfs /mnt/nfs
After that, I created the symbolic link.
ln -s /mnt/nfs/1.bin /mnt/nfs/symlink/1.lnk
/mnt/nfs/1.bin : Target Path
/mnt/nfs/symlink/1.lnk : Symlink Path
Once I entered the command above, I tried to debug on the server side.
In the ProcedureSYMLINK function I could see the state of the variables.
I could correctly get the symlink path in server-based form, but the target path was not server-based.
The target path was still /mnt/nfs/1.bin.
Actually, there is no way to get the NFS client's mount base path (/mnt/nfs) on the server... right?
If I knew the base path (/mnt/nfs), I could calculate the target file path in server-based form, but I don't know the client's base path.
The target file path should be /usr/nfs/1.bin, but there is no way to calculate a path like this.
Does anyone know?
I am using NFS v3
Two points which hopefully will help answer your questions:
In NFS, symlinks are always resolved by the client. From the perspective of the NFS server, a symlink is just a special type of file, and its content (as in, where the symlink points to) is treated as an opaque blob.
The mount point on the client (/mnt/nfs in your example) is purely a client-side matter; there is no provision in the NFS protocol for letting the server know it.
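Both points are easy to see from a shell, assuming your server module stores symlinks as real symlinks under /usr/nfs (paths taken from your example):
ln -s /mnt/nfs/1.bin /mnt/nfs/symlink/1.lnk    # on the client
readlink /usr/nfs/symlink/1.lnk                # on the server: prints /mnt/nfs/1.bin, verbatim
This is also why relative targets are the usual fix - they resolve against the link's own directory on whichever client reads them:
ln -s ../1.bin /mnt/nfs/symlink/1.lnk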

When my AMI instance launches, 99-disable-network-config.cfg is created. What is creating it?

My workflow:
Packer -> Kick off a kickstart build of my system on vCloud
Packer -> Export VM as OVA
Packer -> Upload OVA to s3 bucket in AWS
Packer -> Ask AWS to convert my OVA to AMI
Manual -> Launch AMI instance
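(For context, the OVA-to-AMI conversion step uses AWS VM Import; a sketch of that call, with hypothetical bucket and key names:
aws ec2 import-image --description "my-image" --disk-containers "Format=ova,UserBucket={S3Bucket=my-bucket,S3Key=builds/my-image.ova}"
)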
When I launch my AMI instance, I instantiate cloud-init early on, after the networking service has started. But cloud-init doesn't configure my interfaces because this file is present:
/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
This file tells cloud-init to bypass network configuration, and the end result is that my instance is unreachable.
I know cloud-init runs because it sets my hostname in another file that I have defined in my custom distro script. The host gets its hostname from AWS during boot, so I know the NIC is functional! Also, I can see in /var/log/messages that it gets an IP via DHCP.
Config excerpt:
system_info:
  # This will affect which distro class gets used
  distro: cent1
  network:
    renderers: ['cent1']
Basically, the distro script is run (cloud-init/cloudinit/distros/cent1.py) but the renderers script isn't (cloud-init/cloudinit/net/cent1.py).
I have searched through the Packer code base, the cloud-init code base, and my own code base, and nowhere is the actual creation or moving of such a file present. The only place the file is mentioned is a comment in the cloud-init source. The following comment is found in cloudinit/distros/debian.py (my instance is CentOS, but the comment explains what the presence of this file does):
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
When I stop my instance and mount its volume on another system, I can see the 99-disable-network-config.cfg file is present. Even more confusing is that the top of the file says:
[root@host ~]# cat /root/drop1-3/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
# Automatically generated by the vm import process
network: {config: disabled}
When I do a Google search, I see other references to the string # Automatically generated by the vm import process.
For example, this post has such a reference.
Another bit of info: if I modify cloud-init's util.py to remove 99-disable-network-config.cfg just before it checks for that file, everything works exactly as it should - networking gets configured and I can SSH to my instance.
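The same heavy-handed workaround can be had without patching cloud-init by deleting the marker file before cloud-init's network stage runs, e.g. from a first-boot hook (a sketch, not a root-cause fix):
rm -f /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg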
I'm not putting the 99-disable-network-config.cfg file there, and I don't see anything in Packer's source or cloud-init's source suggesting that either of them is putting it there.
So the question is: where is that file coming from? I already have a workaround, but I'd like to understand the root cause of why that file is present.
(Sorry this is so long-winded, but I've been staring at this problem for days and have zero solutions other than the heavy-handed workaround.)

appcfg.py not working in command line

I'm just having a bit of trouble understanding why this command:
>appcfg.py -A adept-box-109804 update app.yaml
as given by the Try Google App Engine Now page does not work. I have downloaded the App Engine SDK for Python and have my Path set up to point to the location of appcfg.py, but running appcfg.py in my project's root directory does not work from the command line. I either have to navigate to the folder containing appcfg.py and do
>python appcfg.py help
or do
>python "C:\Program Files (x86)\Google\google_appengine\appcfg.py" help
to get a command to work from anywhere. I used the latter method to deploy my test app, but was just wondering if someone could explain why the command given in the simple Google tutorial did not do anything. I also checked to make sure that .py files are automatically opened with the Python 2.7 interpreter, such that a file hello.py will be executed from the command line by simply typing
>hello.py
and it will output its print statement. On the other hand, using appcfg.py in a similar manner gives the same output no matter the arguments (please note I truncated the output, but rest assured that it is identical no matter the arguments):
C:\>appcfg.py help backends
Usage: appcfg.py [options] <action>
Action must be one of:
backends: Perform a backend action.
backends configure: Reconfigure a backend without stopping it.
backends delete: Delete a backend.
backends list: List all backends configured for the app.
backends rollback: Roll back an update of a backend.
backends start: Start a backend.
backends stop: Stop a backend.
backends update: Update one or more backends.
create_bulkloader_config: Create a bulkloader.yaml from a running application.
cron_info: Display information about cron jobs.
delete_version: Delete the specified version for an app.
download_app: Download a previously-uploaded app.
download_data: Download entities from datastore.
help: Print help for a specific action.
list_versions: List all uploaded versions for an app.
request_logs: Write request logs in Apache common log format.
resource_limits_info: Get the resource limits.
rollback: Rollback an in-progress update.
set_default_version: Set the default (serving) version.
start_module_version: Start a module version.
stop_module_version: Stop a module version.
update: Create or update an app version.
update_cron: Update application cron definitions.
update_dispatch: Update application dispatch definitions.
update_dos: Update application dos definitions.
update_indexes: Update application indexes.
update_queues: Update application task queue definitions.
upload_data: Upload data records to datastore.
vacuum_indexes: Delete unused indexes from application.
Use 'help <action>' for a detailed description.
C:\>appcfg.py help update
Usage: appcfg.py [options] <action>
(The remainder of the output is identical to the 'help backends' output above.)
I finally tracked down the real reason, and it wasn't a bug in the App Engine SDK. Rather, it was my Python interpreter: I noticed it wasn't accepting arguments for any .py files. It turned out to be a registry error, located at
[HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command]
where I had to change the value from
"C:\Python27\python.exe" "%1"
to
"C:\Python27\python.exe" "%1" %*
How this happened, whether it was the Python 2.7 installer or maybe the App Engine SDK, I'm not sure.
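If you hit the same symptom, the built-in assoc and ftype commands give a quick way to check argument passing from a Command Prompt (a sketch; note the effective command may live under a different registry key, as it did in my case):
C:\>assoc .py
.py=Python.File
C:\>ftype Python.File
Python.File="C:\Python27\python.exe" "%1" %*
The trailing %* is what forwards the remaining command-line arguments to the script; without it, every invocation behaves as if no arguments were given.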
Your confusion probably stems from mixing up 2 possible invocation styles:
python appcfg.py ...
appcfg.py ...
The 1st one can't make use of the fact that the location of appcfg.py is in the path; it is just an argument to the python executable, which cannot locate the appcfg.py file unless either:
it finds it in the current directory
the appcfg.py file is specified using a full path or a path relative to the current working directory from which python is invoked
This is the reason your 2nd and 3rd commands don't work as you'd expect. Using the 2nd invocation style instead should work if the location of appcfg.py is in the path - just as your last command invocation does.
Key point to remember: the path configuration applies to the command executable only, not to its arguments (which, BTW, each executable may process as it wishes; some executables may combine arguments with the path configuration to obtain the locations of files).
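A quick illustration of that point (the directory here is hypothetical; it assumes appcfg.py's folder is on the Path and .py files are associated with Python):
C:\some\project>python appcfg.py help
(fails: 'appcfg.py' is just an argument, so python looks for it only in C:\some\project)
C:\some\project>appcfg.py help
(works: cmd.exe searches the Path to find appcfg.py itself)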
Similarly, appcfg.py itself (once successfully invoked using either of the 2 invocation styles) needs to be able to locate your app.yaml file, specified as an argument. It cannot do so unless either:
it finds it in the current directory
the app.yaml file (or its directory) is specified using a full path or a path relative to the current working directory from which appcfg.py is invoked
I suspect appcfg.py's inability to locate your app.yaml file may be the reason the 1st command you mentioned didn't work. If not, you should provide details about the failure.
Regarding why the output of your last command is identical regardless of the arguments: I'm not sure, it could be a bug in the Windows version of the SDK. On Linux the output is different:
> appcfg.py help backends
Usage: appcfg.py [options] backends <directory> <action>
Perform a backend action.
The 'backends' command will perform a backends action.
Options:
-h, --help Show the help message and exit.
-q, --quiet Print errors only.
-v, --verbose Print info level logs.
--noisy Print all logs.
-s SERVER, --server=SERVER
The App Engine server.
-e EMAIL, --email=EMAIL
The username to use. Will prompt if omitted.
-H HOST, --host=HOST Overrides the Host header sent with all RPCs.
--no_cookies Do not save authentication cookies to local disk.
--skip_sdk_update_check
Do not check for SDK updates.
-A APP_ID, --application=APP_ID
Set the application, overriding the application value
from app.yaml file.
-M MODULE, --module=MODULE
Set the module, overriding the module value from
app.yaml.
-V VERSION, --version=VERSION
Set the (major) version, overriding the version value
from app.yaml file.
-r RUNTIME, --runtime=RUNTIME
Override runtime from app.yaml file.
-E NAME:VALUE, --env_variable=NAME:VALUE
Set an environment variable, potentially overriding an
env_variable value from app.yaml file (flag may be
repeated to set multiple variables).
-R, --allow_any_runtime
Do not validate the runtime in app.yaml
--oauth2 Ignored (OAuth2 is the default).
--oauth2_refresh_token=OAUTH2_REFRESH_TOKEN
An existing OAuth2 refresh token to use. Will not
attempt interactive OAuth approval.
--oauth2_access_token=OAUTH2_ACCESS_TOKEN
An existing OAuth2 access token to use. Will not
attempt interactive OAuth approval.
--authenticate_service_account
Authenticate using the default service account for the
Google Compute Engine VM in which appcfg is being
called
--noauth_local_webserver
Do not run a local web server to handle redirects
during OAuth authorization.
I had this problem, and it turned out to be caused by my local Python version differing from the App Engine Python version.
So the solution is just to prefix the script with the location of the correct Python interpreter:
C:\Python27\python.exe "C:\Program Files (x86)\Google\google_appengine\appcfg.py"
And then it worked well again.

Bamboo SCP plug-in: how to find directory

I am trying to upload a file to a remote server using the SCP task. I have OpenSSH configured on the remote server in question, and I am using an Amazon EC2 instance running Windows Server 2008 R2 with Cygwin to run the Bamboo build server.
My question is regarding finding the directory I wish to use. I want to upload the entire contents of C:\doc using SCP. The documentation notes that I must use the local path relative to the Bamboo working directory rather than an absolute directory name.
I found by running pwd during the build plan that the working directory is /cygdrive/c/build-dir/CDP-DOC-JOB1. So to get to doc, I can run cd ../../doc. However, when I set my working directory under the SCP configuration to ../../doc/** (using this pattern matching guide), I get the message There were no files to upload. in the log.
C:\doc contains subfolders as well as a textfile in the root directory.
Here is my SCP task configuration: (screenshot from the original post, not reproduced here)
Here is a look from Cygwin at my directory: (screenshot from the original post, not reproduced here)
You may add a first "script" task running a Windows shell that copies everything from C:\doc to some local directory, and then run the SCP task to copy the contents of this new directory onto your remote server:
rem /E copies directories and subdirectories, including empty ones; /F displays full source and destination file names
mkdir doc
xcopy c:\doc .\doc /E /F
Then the pattern for the copy should be /doc/**

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide me with a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to what was encountered with trying to use cron - namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login and thus the default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
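A minimal sketch of such a LaunchDaemon plist (the label, script path, user name, and schedule are hypothetical; SessionCreate is the key that makes the login keychain available):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.nightlybuild</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/builduser/bin/nightly_build.sh</string>
    </array>
    <key>UserName</key>
    <string>builduser</string>
    <!-- Create a security session so the user's login.keychain is
         available, not just the System keychain -->
    <key>SessionCreate</key>
    <true/>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
Load it with: sudo launchctl load /Library/LaunchDaemons/com.example.nightlybuild.plist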
Which account do you execute your cron job with?
That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see which user the script is launched as.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables as when you launch it as a user (it runs as a non-login shell).
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH etc. Then in your build script, load set_build_env.sh using the dot notation or the source command as ericc said. You should also remove the build-specific lines from your .profile and then source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
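A minimal sketch of what such a file might contain (the values are illustrative; copy whatever your build actually needs from your .profile):
# /home/dmitry/set_build_env.sh - environment the build needs, kept in one place
export HOME=/home/dmitry
export PATH=/usr/local/bin:/usr/bin:/bin:$HOME/bin
export CLASSPATH=$HOME/lib/build-tools.jar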