Programming Geoserver 2.0.2 to add a new data store and layer without the UI

I have a directory of imagery that will be updated continually. From this imagery, I am making image pyramids using the GeoTools PyramidBuilder utility. I need to set up a cron job to automatically add new datastores and layers to GeoServer without using the UI.
After looking at the REST section of the GeoServer manual I was able to add my workspace, "testWS", but trying to create an ImagePyramid datastore did not work.
Since I have access to the datastore, I expanded on the shapefile example and tried:
curl -u admin:geoserver -XPUT -H 'Content-type: text/plain' \
-d '/opt/geoserver_data_dir/2.0.2/data/test_pyramid.pyr' \
"http://localhost:8080/geoserver/rest/workspaces/testWS/datastores/test_pyramid.pyr external.imagepyramid?configure=all"
Where test_pyramid.pyr is the base of my ImagePyramid at this location.
This gave me the error "No such datastore: test_pyramid".
Is there a better way to add a new datastore and layer to GeoServer without manually adding each one via the UI? I need help crafting the proper REST statement that will add an existing ImagePyramid as a datastore and layer.
Is there some Java code that could do this? I looked at the Geoserver python extensions but they did not have this either.

You need to explore the RESTConfig module. It is included in GeoServer 2.1 but is a separate plugin for 2.0. See http://docs.geoserver.org/2.0.0/user/extensions/rest/index.html for details.
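With RESTConfig installed, one thing to check is that an image pyramid is a coverage store rather than a data store, so the request goes through coveragestores. A minimal sketch reusing the paths and names from the question (verify the exact syntax against the REST docs for your GeoServer version):
curl -u admin:geoserver -XPUT -H 'Content-type: text/plain' \
-d 'file:///opt/geoserver_data_dir/2.0.2/data/test_pyramid' \
"http://localhost:8080/geoserver/rest/workspaces/testWS/coveragestores/test_pyramid/external.imagepyramid?configure=all"
With configure=all, GeoServer should also create the corresponding coverage and layer, which is what the cron job needs.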

Related

Cloud Run deployment using image from last revision

We need to deploy labels to multiple Cloud Run services using the API method below:
https://cloud.google.com/run/docs/reference/rest/v1/namespaces.services/replaceService
We are looking for options to apply labels using the API without deploying any new image from Container Registry. We understand that there will be a deployment and a revision change while applying labels, but during that deployment it should not pull a new image from Container Registry; rather, it should use the image from the last revision. Is there any configuration parameter in Cloud Run to prevent new images being pulled while applying labels using the API or gcloud run services update SERVICE --update-labels KEY=VALUE?
The principle of Cloud Run (and Knative, because the behavior is the same) is that a revision is immutable. Thus, if you change something in it, a new revision is created. You can't fake it!
So the solution is to not use the latest tag of your image, but its SHA digest.
# The latest (mutable) tag
gcr.io/PROJECT_ID/myImage
gcr.io/PROJECT_ID/myImage:latest
# A specific, immutable version, pinned by digest
gcr.io/PROJECT_ID/myImage@sha256:123465465dfsqfsdf
Of course, you have to update your YAML for this.
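If you drive the update with gcloud instead of YAML, a minimal sketch of pinning to the digest while setting labels (SERVICE, REGION, and the image name are placeholders; gcloud container images describe assumes the image lives in Container Registry):
# Resolve the digest of the image currently tagged latest
DIGEST=$(gcloud container images describe "gcr.io/PROJECT_ID/myImage:latest" \
--format="value(image_summary.digest)")
# Update labels while pinning the exact image, so no newer build is pulled
gcloud run services update SERVICE --region REGION \
--image "gcr.io/PROJECT_ID/myImage@${DIGEST}" \
--update-labels KEY=VALUE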

unable to delete custom plugin from datafusion instance

I tried uploading a custom JAR as a CDAP plugin, but it has a few errors in it. I want to delete that particular plugin and upload a new one. What is the process for this? I looked for documentation, but it was not very informative.
Thanks in advance!
You can click on the hamburger menu, and click on Control Center at the bottom of the left panel. In the Control Center, click on Filter by and select the checkbox for Artifacts. After that, you should see the artifact listed in the Control Center, and you can then delete it.
Alternatively, we suggest that while developing, the version of the artifact be suffixed with -SNAPSHOT (i.e. 1.0.0-SNAPSHOT). Any -SNAPSHOT version can be overwritten simply by re-uploading, so you don't have to delete the artifact before deploying a patched plugin JAR.
Each Data Fusion instance runs in a fully isolated GCP tenant project, with orchestration, pipeline lifecycle management, and coordination handled as GCP-managed tasks; you can therefore perform user-defined actions either in the dedicated Data Fusion UI or against the execution environment via CDAP REST API HTTP calls.
The purpose of the Data Fusion UI is to visually design data pipelines and control ETL processing through the phases of execution, and you can do the same by calling the corresponding CDAP APIs.
In the upstream CDAP documentation you can find the Artifact HTTP RESTful API, which offers a set of HTTP methods for managing custom plugin operations.
Following the GCP documentation, a few simple steps prepare the environment, setting a CDAP_ENDPOINT variable for the target Data Fusion instance (plus an auth token) so you can invoke API functions via HTTP calls against the CDAP endpoint, i.e.:
export INSTANCE_ID=your-instance-id
export CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe \
--location=us-central1 \
--format="value(apiEndpoint)" \
${INSTANCE_ID})
# Access token used to authorize the HTTP calls below
export AUTH_TOKEN=$(gcloud auth print-access-token)
When the above steps are done, you can issue the HTTP call for the specific action you need.
For plugin deletion, try this one, invoking the HTTP DELETE method:
curl -X DELETE -H "Authorization: Bearer ${AUTH_TOKEN}" "${CDAP_ENDPOINT}/v3/namespaces/system/artifacts/<artifact-name>/versions/<artifact-version>"
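If you are not sure of the exact artifact name and version, you can list what is deployed first; a sketch using the CDAP artifact list endpoint (note that user-uploaded plugins often live in the default namespace rather than system):
curl -X GET -H "Authorization: Bearer ${AUTH_TOKEN}" "${CDAP_ENDPOINT}/v3/namespaces/default/artifacts"
The response is JSON listing each artifact's name, version, and scope, which you can plug into the DELETE call above.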

Some paster command not working in ckan 2.7.3

Trying to use the info in:
http://docs.ckan.org/en/ckan-1.4.3/authorization.html
to create users and assign roles to specific packages, but the rights command does not work.
For instance:
paster --plugin=ckan rights -c /etc/ckan/default/development.ini list
I get the error:
Command 'rights' not known (you may need to run setup.py egg_info)
Known commands:
celeryd Celery daemon [DEPRECATED]
check-po-files Check po files for common mistakes
color Create or remove a color scheme.
config-tool Tool for editing options in a CKAN config file
create Create the file layout for a Python distribution
create-test-data Create test data in the database.
datapusher Perform commands in the datapusher
dataset Manage datasets
datastore Perform commands to set up the datastore
db Perform various tasks on the database.
exe Run #! executable files
front-end-build Creates and minifies css and JavaScript files
help Display help
jobs Manage background jobs
less Compile all root less documents into their CSS counterparts
make-config Install a package and create a fresh config file/directory
minify Create minified versions of the given Javascript and CSS files.
notify Send out modification notifications.
plugin-info Provide info on installed plugins.
points Show information about entry points
post Run a request for the described application
profile Code speed profiler
ratings Manage the ratings stored in the db
rdf-export Export active datasets as RDF
request Run a request for the described application
search-index Creates a search index for all datasets
serve Serve the described application
setup-app Setup an application, given a config file
sysadmin Gives sysadmin rights to a named user
tracking Update tracking statistics
trans Translation helper functions
user Manage users
views Manage resource views.
but if I create a user like this:
paster sysadmin add seanh -c /etc/ckan/default/development.ini
works OK, so I don't think the problem is in my environment.
Note:
CentOS 7.4
CKAN 2.7.3
Thanks
'Rights' was deprecated in the migration to CKAN 2.X, and the paster command removed.
From CKAN 2.0, permissions are managed per organization and per group. It's a simplification, catering for what is considered the most common use case.
However, if you need to control user permissions on a single dataset (rather than all the datasets in an org/group together), then that dataset needs to be on its own in an org or group. Or you can customize the auth system using IAuthFunctions.
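For the per-organization route, a sketch of granting a user a role through the action API (the host, API key, organization name, and role are placeholders; organization_member_create is the relevant CKAN action):
curl -H "Authorization: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"id": "my-org", "username": "seanh", "role": "editor"}' \
"http://localhost:5000/api/3/action/organization_member_create"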

google cloud machine learning error

Please help me, I cannot solve this:
ERROR: (gcloud.beta.ml.models.versions.create) FAILED_PRECONDITION: Field: version.deployment_uri Error: The model directory gs://valued-aquifer-164405-ml/mnist_deployable_garu_20170413_150711/model/ is expected to contain exactly one of the following: the 'export.meta' file, or 'saved_model.pb' file or 'saved_model.pbtxt' file.Please make sure one of these files exists and you have read access to it.
I am new to Google Cloud and hit the same kind of issue when trying to create a version for a model. I resolved it.
You need to do two steps:
Export the model: this gives you saved_model.pbtxt. I am using TensorFlow, so I used export_savedmodel().
Upload saved_model.pbtxt and the variables folder to storage.
Then try again.
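A sketch of the upload step, assuming the export landed in a local timestamped directory (the bucket name and paths are placeholders):
# Copy the exported SavedModel (saved_model.pbtxt plus the variables/ folder) to Cloud Storage
gsutil cp -r export/1523456789/* gs://YOUR_BUCKET/model/
# The model directory should now contain exactly one saved_model.pb or saved_model.pbtxt
gsutil ls gs://YOUR_BUCKET/model/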
This command has since been updated to gcloud ml-engine versions create.
It is recommended to run gcloud components update to install the latest gcloud, then follow the new instructions for deploying your own models to Cloud ML Engine.
Note: if you experience issues with gcloud in the future, it is recommended to report them in the Public Issue Tracker.
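For reference, a minimal sketch of the updated command (the model name, version name, and bucket path are placeholders):
gcloud ml-engine versions create v1 \
--model MODEL_NAME \
--origin gs://YOUR_BUCKET/model/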

WSO2 Business Activity Monitor cannot edit am_stats_analyzer

I am trying to fix a bug I am dealing with, documented here: https://wso2.org/jira/browse/APIMANAGER-2032
When I go into my BAM 2.4.1 admin console and go to "Home > Manage > Analytics > List" to try and make the change to my am_stats_analyzer script, I am unable to edit it (that option is not available).
Does anyone know another way to update this script so it no longer throws this exception?
Editing hive scripts that were deployed by a toolbox is not recommended, since a server restart or a redeployment of the same toolbox will wipe out your local changes. That is why the edit option was removed for hive scripts deployed via a toolbox. If you still need to change such a script via the BAM management console, click Copy New Script, make your changes there, and save it under another name.
If you want to modify the same script that was deployed by the APIM toolbox, you need to change the toolbox itself: extract the toolbox, go to the analytics directory and edit the hive script you are interested in, then zip it again and rename it with a .tbox extension. Now redeploy the updated toolbox in BAM.
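A sketch of that repack flow in shell (the toolbox file name is a placeholder; a .tbox is a zip archive):
# Extract the toolbox
unzip API_Manager_Analytics.tbox -d toolbox
# Edit the hive script of interest under toolbox/analytics/, e.g. the am_stats_analyzer script,
# applying the fix from APIMANAGER-2032, then zip it back up with the .tbox extension
cd toolbox && zip -r ../API_Manager_Analytics_patched.tbox . && cd ..
# Finally, redeploy the patched .tbox in BAM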