appcfg.py not working in command line - python-2.7

I'm just having a bit of trouble understanding why this command:
>appcfg.py -A adept-box-109804 update app.yaml
as given by the Try Google App Engine Now page does not work. I have downloaded the App Engine SDK for Python, and have my PATH set up to point to the location of appcfg.py, but running appcfg.py from my project's root directory does not work from the command line. I either have to navigate to the folder containing appcfg.py and do
>python appcfg.py help
or do
>python "C:\Program Files (x86)\Google\google_appengine\appcfg.py" help
to get a command to work from anywhere. I used the latter method to deploy my test app, but was just wondering if someone could explain why the command as given by the simple Google tutorial did not do anything. I also checked to make sure that .py files are automatically opened with the Python 2.7 interpreter, such that a file hello.py will be executed in the command line by simply typing
>hello.py
and it will output its print statement. On the other hand, using appcfg.py in a similar manner gives the same output no matter the arguments (please note I truncated the output, but rest assured the results are identical regardless of the arguments):
C:\>appcfg.py help backends
Usage: appcfg.py [options] <action>
Action must be one of:
backends: Perform a backend action.
backends configure: Reconfigure a backend without stopping it.
backends delete: Delete a backend.
backends list: List all backends configured for the app.
backends rollback: Roll back an update of a backend.
backends start: Start a backend.
backends stop: Stop a backend.
backends update: Update one or more backends.
create_bulkloader_config: Create a bulkloader.yaml from a running application.
cron_info: Display information about cron jobs.
delete_version: Delete the specified version for an app.
download_app: Download a previously-uploaded app.
download_data: Download entities from datastore.
help: Print help for a specific action.
list_versions: List all uploaded versions for an app.
request_logs: Write request logs in Apache common log format.
resource_limits_info: Get the resource limits.
rollback: Rollback an in-progress update.
set_default_version: Set the default (serving) version.
start_module_version: Start a module version.
stop_module_version: Stop a module version.
update: Create or update an app version.
update_cron: Update application cron definitions.
update_dispatch: Update application dispatch definitions.
update_dos: Update application dos definitions.
update_indexes: Update application indexes.
update_queues: Update application task queue definitions.
upload_data: Upload data records to datastore.
vacuum_indexes: Delete unused indexes from application.
Use 'help <action>' for a detailed description.
C:\>appcfg.py help update
Usage: appcfg.py [options] <action>
Action must be one of:
backends: Perform a backend action.
backends configure: Reconfigure a backend without stopping it.
backends delete: Delete a backend.
backends list: List all backends configured for the app.
backends rollback: Roll back an update of a backend.
backends start: Start a backend.
backends stop: Stop a backend.
backends update: Update one or more backends.
create_bulkloader_config: Create a bulkloader.yaml from a running application.
cron_info: Display information about cron jobs.
delete_version: Delete the specified version for an app.
download_app: Download a previously-uploaded app.
download_data: Download entities from datastore.
help: Print help for a specific action.
list_versions: List all uploaded versions for an app.
request_logs: Write request logs in Apache common log format.
resource_limits_info: Get the resource limits.
rollback: Rollback an in-progress update.
set_default_version: Set the default (serving) version.
start_module_version: Start a module version.
stop_module_version: Stop a module version.
update: Create or update an app version.
update_cron: Update application cron definitions.
update_dispatch: Update application dispatch definitions.
update_dos: Update application dos definitions.
update_indexes: Update application indexes.
update_queues: Update application task queue definitions.
upload_data: Upload data records to datastore.
vacuum_indexes: Delete unused indexes from application.
Use 'help <action>' for a detailed description.

I finally tracked down the real reason, and it wasn't a bug with the App Engine SDK. Rather, it was with my Python interpreter: I noticed it wasn't accepting arguments for any .py files. It turned out to be a registry error, located at [HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command], where I had to change the value from "C:\Python27\python.exe" "%1" to "C:\Python27\python.exe" "%1" %*
How this happened, whether it was the Python 2.7 installer or maybe the App Engine SDK, I'm not sure.
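For reference, the same registry change can be scripted rather than edited by hand in regedit; this is only a sketch of that one edit, to be run from an elevated command prompt:
rem Set the default value so python.exe receives the script ("%1") plus all remaining arguments (%*)
reg add "HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command" /ve /t REG_SZ /d "\"C:\Python27\python.exe\" \"%1\" %*" /f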

Your confusion probably stems from mixing up two possible invocation styles:
python appcfg.py ...
appcfg.py ...
The first one can't make use of the fact that the location of appcfg.py is in the PATH; there, appcfg.py is just an argument to the python executable, which cannot locate the appcfg.py file unless either:
it finds it in the current directory
the appcfg.py file is specified using a full path or a path relative to the current working directory from which python is invoked
This is why your 2nd and 3rd commands only work when run from the SDK directory or with the full path to appcfg.py spelled out. Using the 2nd invocation style instead should work from anywhere, provided the location of appcfg.py is in the PATH, just as your last command invocation does.
Key point to remember: the PATH configuration applies only to the command executable, not to its arguments (which, by the way, each executable may process as it wishes; some executables may combine arguments with the PATH configuration to locate files).
Similarly, appcfg.py itself (once successfully invoked using either of the two invocation styles) needs to be able to locate the app.yaml file specified as an argument. It cannot do so unless either:
it finds it in the current directory
the app.yaml file (or its directory) is specified using a full path or a path relative to the current working directory from which appcfg.py is invoked
I suspect appcfg.py's inability to locate your app.yaml file may be the reason the 1st command you mentioned didn't work. If not, you should provide details about the failure.
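To make the distinction concrete, here is a sketch of two ways the original command can resolve app.yaml (the project path is a placeholder, not taken from the question):
rem From the directory that contains app.yaml (appcfg.py is found via the PATH, app.yaml via the current directory)
cd C:\path\to\my_project
appcfg.py -A adept-box-109804 update app.yaml
rem From anywhere, spelling out where app.yaml lives
appcfg.py -A adept-box-109804 update C:\path\to\my_project\app.yaml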
Regarding why the output of your last command is identical regardless of the arguments, I'm not sure; it could be a bug in the Windows version of the SDK. On Linux the output is different:
> appcfg.py help backends
Usage: appcfg.py [options] backends <directory> <action>
Perform a backend action.
The 'backends' command will perform a backends action.
Options:
-h, --help Show the help message and exit.
-q, --quiet Print errors only.
-v, --verbose Print info level logs.
--noisy Print all logs.
-s SERVER, --server=SERVER
The App Engine server.
-e EMAIL, --email=EMAIL
The username to use. Will prompt if omitted.
-H HOST, --host=HOST Overrides the Host header sent with all RPCs.
--no_cookies Do not save authentication cookies to local disk.
--skip_sdk_update_check
Do not check for SDK updates.
-A APP_ID, --application=APP_ID
Set the application, overriding the application value
from app.yaml file.
-M MODULE, --module=MODULE
Set the module, overriding the module value from
app.yaml.
-V VERSION, --version=VERSION
Set the (major) version, overriding the version value
from app.yaml file.
-r RUNTIME, --runtime=RUNTIME
Override runtime from app.yaml file.
-E NAME:VALUE, --env_variable=NAME:VALUE
Set an environment variable, potentially overriding an
env_variable value from app.yaml file (flag may be
repeated to set multiple variables).
-R, --allow_any_runtime
Do not validate the runtime in app.yaml
--oauth2 Ignored (OAuth2 is the default).
--oauth2_refresh_token=OAUTH2_REFRESH_TOKEN
An existing OAuth2 refresh token to use. Will not
attempt interactive OAuth approval.
--oauth2_access_token=OAUTH2_ACCESS_TOKEN
An existing OAuth2 access token to use. Will not
attempt interactive OAuth approval.
--authenticate_service_account
Authenticate using the default service account for the
Google Compute Engine VM in which appcfg is being
called
--noauth_local_webserver
Do not run a local web server to handle redirects
during OAuth authorization.

I had this problem, and it turned out that my local Python version was different from the Python version the App Engine SDK expects. So the solution is just to put the location of the correct Python interpreter in front of the script:
C:\Python27\python.exe "C:\Program Files (x86)\Google\google_appengine\appcfg.py"
And it went back to working well.
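If you are unsure which interpreter is being picked up first, a quick sanity check from a Windows command prompt is:
rem Shows every python.exe on the PATH, in resolution order
where python
rem Should report the 2.7.x interpreter the App Engine SDK expects
python --version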

Related

wso2 Enabling Notifications for User Operations-template path error

I'm trying to enable notifications for user operations on WSO2 IS 5.11, started via Docker Desktop on a Windows 11 machine.
Can someone explain what is wrong here?
The problem appears at runtime, because the Docker logs display the following error:
ERROR {org.wso2.carbon.identity.notification.mgt.NotificationMgtConfigBuilder} - Error while reading email template from location C:\Users\rocco\Documents\WSO2\docker-wso2\conf\is-as-km\repository\template.xml java.io.FileNotFoundException: C:\Users\rocco\Documents\WSO2\docker-wso2\conf\is-as-km\repository\template.xml (No such file or directory)
msg-mgt.properties file:
module.name.1=email
email.subscription.1=userOperation
email.subscription.userOperation.template=C:\Users\rocco\Documents\WSO2\docker-wso2\conf\is-as-km\repository\template.xml
#email.subscription.userOperation.salutation=Admin
email.subscription.userOperation.subject=User operation change information
email.subscription.userOperation.endpoint.1=wso2iamtest
email.subscription.userOperation.endpoint.privateMail.address=wso2iamtest@outlook.it
#email.subscription.userOperation.endpoint.privateMail.salutation=Admin private mail
#email.subscription.userOperation.endpoint.privateMail.subject=User operation change information to private mail
As per the error log, you have configured the wrong path for the email template location.
Even though you are using a Windows machine, WSO2 IS is running as a Docker instance.
WSO2 IS Docker images are based on Ubuntu/Alpine/CentOS base images (see the available variants here: https://hub.docker.com/r/wso2/wso2is).
So the path you configure has to be a path inside that container, not a Windows path.
When configuring the path for email.subscription.userOperation.template=, we have to give the absolute path inside the WSO2 IS server where the particular template file is located.
In order to find the exact path:
Log in to the Docker container with the docker exec -it <container id> bash command.
Once logged in, you will see the wso2is-5.11.0 folder. Navigate to the location where your template is placed and run the pwd command to get the path. Then append the file name and use that as the value of the email.subscription.userOperation.template= property.
As per the example, I created a file named template.txt.
Added the following content to it.
Hi {username}
This is a test mail to your private mail. The operation occurred was: {operation}.
Moved the created template into the Docker container, at the path used in the property below.
According to my case, the msg-mgt.properties file should have the config as follows.
email.subscription.userOperation.template=/home/wso2carbon/wso2is-5.11.0/repository/template.txt
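For completeness, a sketch of how the template can be copied into the running container and the path confirmed (the container id is a placeholder, as above):
# Copy the template into the container, then verify the absolute path to use in msg-mgt.properties
docker cp template.txt <container id>:/home/wso2carbon/wso2is-5.11.0/repository/template.txt
docker exec -it <container id> ls -l /home/wso2carbon/wso2is-5.11.0/repository/template.txt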

Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the XML files in cfusion/lib/ (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the wddx xml file and replace the attribute values. I'm having trouble finding information about how to do this method.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11 and from 11 to 2016. Environments would have to be mixed as it took time to verify the applications worked with each new version of CF. Each server got their correct XML files for that environment and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
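A minimal sketch of what such a copy step might look like on one of those servers (all paths here are assumptions, not from the original setup):
# Push the environment-specific settings files from source control onto the box,
# then restart ColdFusion so the new XML is picked up
cp /opt/deploy/cf-settings/prod/neo-*.xml /opt/coldfusion/cfusion/lib/
/opt/coldfusion/cfusion/bin/coldfusion restart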
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command line utility. The tool isn't only capable of setting Administrator configurations: it can also export/import settings to/from a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19
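For the export step (item 3 above), the matching CFConfig command looks roughly like this; the paths and engine version are placeholders, so check the CFConfig docs linked above for the exact options:
# Export settings from an existing server into a JSON file kept in source control
box cfconfig export to=/<path-to-config>/config.json from=/opt/ColdFusion/cfusion/ fromFormat=adobe@11.0.19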

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown and the Rscript is ignored. Removing the last line to shutdown causes the GCE instance to start, but the Rscript to be ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # for Debian
$ sudo /usr/share/google/run-startup-scripts                  # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh had set up its tunnel. Building a 40 second delay into the script and running the Rscript as a regular user (not as root) seems to have solved this problem for now (see the sketch after this list). Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
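For reference, here is a sketch of the adjusted startup script described in the first point above; the delay length, user name, and file paths are carried over from the question and are assumptions, not a recommended pattern:
#! /bin/bash
# Wait for autossh (started for the main user at boot) to establish its tunnel
sleep 40
# Run the R job as the regular user rather than as root
su - myuser -c "/usr/bin/Rscript /home/myuser/launch_script.R"
shutdown -h now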

how to apply patch files in wso2 carbon

Can someone please run through the steps that you need to follow to apply .patch files onto WSO2 ESB v4.0.3? I've tried the following:
Upload the .patch file to repository/components/patches and carbon_home/lib/patches
Run wso2server.sh start -DapplyPatches
This command creates a dir called patch000 in the components/patches directory and fills it with plugins.
The patch I want to apply is https://issues.apache.org/jira/browse/TRANSPORTS-51
Many thanks
You need to apply the .patch file to the code base and create jar(s) out of it. Then create a folder with the name of the patch (e.g. patch001), place the jar(s) inside, and copy the newly created patch folder (e.g. patch001) to repository/components/patches.
Now running the wso2server.sh -DapplyPatches will work.
When you execute -DapplyPatches, it takes a backup of the original content of the repository/components/plugins directory into the repository/components/patches directory; that's why you see the patch000 folder (and hence reverting to a previous state is possible).
From Carbon 4.2.0 onwards you don't need to provide the -DapplyPatches option in order to apply patches. When the server starts up, it automatically detects and applies any new patches.
This can be verified from the log file repository/logs/patches.log
For a WSO2 official patch:
Read the readme file. (not a must step, but better if you do)
Shut down the server, if you have already started it.
Copy the wso2carbon-version.txt file to <PRODUCT_HOME>/bin. (not a must step, but better if you do)
Copy the patch<number> folder to <PRODUCT_HOME>/repository/components/patches/
Restart the server with:
Linux/Unix: sh wso2server.sh
Windows: wso2server.bat
For a patch created by you:
Compile (mvn clean install) and get the jar from the modified code base. For example, let's say you are creating a patch for the carbon-registry extensions: first clone carbon-registry, make the fix, then go to the extensions module and build the jar with Maven (mvn clean install). Then create a folder with the name of the patch (e.g. patch9999), place the jar(s) inside, and copy the newly created patch folder to repository/components/patches. Now simply restart/start the product and the patch will get applied.
./wso2server.sh restart/start
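A quick sketch of those packaging steps (the module, jar name, and CARBON_HOME location are assumptions for illustration):
# Build the modified module and copy the resulting jar into a new patch folder
mvn clean install
mkdir -p $CARBON_HOME/repository/components/patches/patch9999
cp target/org.wso2.carbon.registry.extensions-*.jar $CARBON_HOME/repository/components/patches/patch9999/
# Restart the product so the patch is detected and applied (Carbon 4.2.0 or newer)
sh $CARBON_HOME/bin/wso2server.sh restart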
However, if the product is older than Carbon 4.2.0, you have to provide the -DapplyPatches attribute when starting the product, like below.
./wso2server.sh -DapplyPatches
If the patch gets applied successfully, you will see the following set of lines at the beginning of the log.
[2016-08-24 20:27:25,319] INFO {org.wso2.carbon.server.extensions.PatchInstaller} - Patch changes detected
[2016-08-24 20:27:27,980] INFO {org.wso2.carbon.server.util.PatchUtils.console} - Backed up plugins to patch0000
[2016-08-24 20:27:28,010] INFO {org.wso2.carbon.server.util.PatchUtils.console} - Patch verification started
[2016-08-24 20:27:28,034] INFO {org.wso2.carbon.server.util.PatchUtils.console} - Patch verification successfully completed.
As Sajith says, we have added -DapplyPatches as a JVM parameter by default in the wso2server.sh file.

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to those encountered with cron, namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, so the default keychains you expect are available; otherwise you are stuck with only the System keychain, despite the task running as your user.
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
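For reference, a rough sketch of such a LaunchDaemon with SessionCreate set is below; the label, schedule, user name, and build script path are placeholders, not part of the original answer:
# Write the LaunchDaemon plist and load it (SessionCreate makes the user's login.keychain available)
sudo tee /Library/LaunchDaemons/com.example.nightlybuild.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>com.example.nightlybuild</string>
    <key>ProgramArguments</key>
    <array><string>/Users/builduser/build_nightly.sh</string></array>
    <key>StartCalendarInterval</key>
    <dict><key>Hour</key><integer>2</integer><key>Minute</key><integer>0</integer></dict>
    <key>UserName</key><string>builduser</string>
    <key>SessionCreate</key><true/>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/com.example.nightlybuild.plist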
Which account do you execute your cron job with? That is most probably the problem.
You can add
echo `whoami`
at the beginning of your script to see which user the script is launched as.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH, etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
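For illustration, a set_build_env.sh along those lines might look like the sketch below (the specific values are placeholders, not taken from the answer):
# Hypothetical /home/dmitry/set_build_env.sh: everything the build needs from .profile
export PATH=/usr/local/bin:/usr/bin:/bin:$HOME/bin
export CLASSPATH=$HOME/lib/build-tools.jar
export LANG=en_US.UTF-8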