Connect Adobe Flex To ArcGIS Service?

I'm trying to connect my Flex app to my ArcGIS web services. I tried using the Connect to Web Service interface, but I keep getting this error:
There was an error during service introspection. WSDLException: faultCode=PARSER_ERROR: Problem parsing 'http://localhost/ArcGIS/rest/services/geodata/MapServer'.: org.xml.sax.SAXParseException: The element type "link" must be terminated by the matching end-tag "</link>".
My web service looks like this
ArcGIS Services Directory Home > geodata (MapServer)
Service Description:
Map Name: Layers
Legend
All Layers and Tables
Layers:
Geocoding_Result layer (0)
Tables:
Description:
Copyright Text:
Spatial Reference: 4326
Single Fused Map Cache: false
Initial Extent:
XMin: -95.901360470612
YMin: 29.4513469530748
XMax: -95.1472749640384
YMax: 30.045474927951
Spatial Reference: 4326
Full Extent:
XMin: -100.3273442
YMin: 29.451583
XMax: -94.8230278
YMax: 31.250677
Spatial Reference: 4326
Units: esriDecimalDegrees
Supported Image Format Types:
PNG24,PNG,JPG,DIB,TIFF,EMF,PS,PDF,GIF,SVG,SVGZ,AI,BMP
Document Info:
Title:
Author:
Comments:
Subject:
Category:
Keywords:
Credits:
Supported Interfaces: REST SOAP
Supported Operations: Export Map
Identify Find

Maybe you already know this, but if you are trying to connect to an ArcGIS Server using Flex you might be interested in the ArcGIS API for Flex (http://links.esri.com/flex) - it will take care of most of that for you.

Antarr,
It's a little hard to tell from your question exactly what you are trying to do, but here are a couple of possibilities:
1) It looks like you might be trying to add a reference to this service via Flash Builder's "Connect to Web Service" dialog. That dialog is for SOAP web services, not for the REST endpoint you note above (http://localhost/ArcGIS/rest/services/geodata/MapServer). If you intend to use the REST endpoints, then you need to use the appropriate ESRI ArcGIS API for Flex class (for example DynamicMapServiceLayer or Locator) for whatever you are trying to do (generate a map image, geocode addresses, etc.). See the ESRI help on the Flex API for more information:
http://help.arcgis.com/en/webapi/flex/apiref/index.html
2) If you intend to use the ESRI SOAP API, then you do want to use Flash Builder's "Connect to Web Service" dialog, but you must use the SOAP service endpoint, which would be something like http://localhost/ArcGIS/services/geodata/MapServer?wsdl (though I don't know why you'd want to do this, since the Flex API is really designed to be used with ESRI's REST service endpoints).
3) The only layer in your service is called "Geocoding_Result" - is that an actual feature layer or just a temporary output from a geocoding operation done with ArcMap? I am not sure whether a temporary result would work when published as a service.
See if any of these suggestions help. If not, then clarify what you are trying to do and perhaps I can give you more specific assistance.
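To make the distinction between 1) and 2) concrete, here is a small sketch (in JavaScript for illustration; the helper name is hypothetical) that derives the SOAP WSDL address from a REST MapServer URL, following the URL patterns described above:

```javascript
// Hypothetical helper: given an ArcGIS Server REST MapServer URL, derive the
// corresponding SOAP WSDL endpoint, following the patterns in the answer above.
function restToSoapWsdl(restUrl) {
  // REST:  http://host/ArcGIS/rest/services/<service>/MapServer
  // SOAP:  http://host/ArcGIS/services/<service>/MapServer?wsdl
  return restUrl.replace("/ArcGIS/rest/services/", "/ArcGIS/services/") + "?wsdl";
}

const rest = "http://localhost/ArcGIS/rest/services/geodata/MapServer";
console.log(restToSoapWsdl(rest));
// http://localhost/ArcGIS/services/geodata/MapServer?wsdl
```

Pointing the "Connect to Web Service" dialog at the REST URL is exactly what produces the WSDLException in the question: the introspector tries to parse the REST endpoint's HTML page as WSDL.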

Related

Can you start AI platform jobs from HTTP requests?

I have a web app (react + node.js) running on App Engine.
I would like to kick off (from this web app) a Machine Learning job that requires a GPU (running in a container on AI platform or running on GKE using a GPU node pool like in this tutorial, but we are open to other solutions).
I was thinking of trying what is described at the end of this answer, basically making an HTTP request to start the job using project.job.create API.
More details on the ML job in case this is useful: it generates an output every second that is stored on Cloud Storage and then read in the web app.
I am looking for examples of how to set this up. Where would the job configuration live, and how should I set up the API call to kick off that job? Are there other ways to achieve the same result?
Thank you in advance!
On Google Cloud, everything is an API, and you can interact with every product through HTTP requests, so you can definitely achieve what you want.
I don't personally have an example at hand, but you have to build a JSON job description and POST it to the API.
Don't forget: when you interact with a Google Cloud API, you have to add an access token in the Authorization: Bearer header.
Where should your job config description live? It depends...
If it is strongly related to your App Engine app, you can add it to the App Engine code itself and have it "hard coded". The downside of that option is that any time you have to update the configuration, you have to deploy a new App Engine version. But if your new version isn't correct, a rollback to a previous, stable version is easy and consistent.
If you prefer to update your config file independently of your App Engine code, you can store the config outside the App Engine code, on Cloud Storage for instance. That way the update is simple and easy: change the config on Cloud Storage to change the job configuration. However, there is no longer a relation between the App Engine version and the config version, and rolling back to a stable version can be more difficult.
You can also have a combination of both, where you have a default job configuration in your App Engine code, and an environment variable that can point to a Cloud Storage file containing a newer version of the configuration.
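That combined approach can be sketched as follows (a minimal sketch: the constant, function, and environment variable names are hypothetical, and fetchFromStorage stands in for whatever Cloud Storage client you use):

```javascript
// Sketch of the combined approach: a default job configuration baked into the
// App Engine code, plus an environment variable that can point at a Cloud
// Storage object holding an override.
const DEFAULT_JOB_CONFIG = { jobId: "default-job", region: "us-central1" };

async function loadJobConfig(fetchFromStorage) {
  const overridePath = process.env.JOB_CONFIG_GCS_PATH; // e.g. "gs://my-bucket/job.json"
  if (!overridePath) {
    return DEFAULT_JOB_CONFIG; // hard-coded default, versioned with the app
  }
  // fetchFromStorage is assumed to download and JSON-parse the object
  return fetchFromStorage(overridePath);
}
```

Setting or unsetting the environment variable then switches between the two config sources without a redeploy.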
I don't know if it answers all your questions. Don't hesitate to comment if you want more details on some parts.
As mentioned, you can use the AI Platform API to create a job via a POST.
Following is an example using JavaScript and the request library to trigger a job.
Some useful tips:
Use the Jobs console to create a job manually, then use the API to list that job: you will have a perfect JSON example of how to trigger it.
You can use the Try this API tool to get the JSON output of the manually created job. Use this path to get the job: projects/<project name>/jobs/<job name>.
Get the authorization token using the OAuth 2.0 Playground for test purposes (Step 2 -> Access token). Check the docs for a definitive way.
Not all parameters are required in the JSON; this is just one example from a job that I created, with the JSON obtained using the steps above.
JS Example:
var request = require('request');

request({
    url: 'https://content-ml.googleapis.com/v1/projects/<project-name>/jobs?alt=json',
    method: 'POST',
    headers: {"authorization": "Bearer ya29.A0AR9999999999999999999999999"},
    json: {
        "jobId": "<job name>",
        "trainingInput": {
            "scaleTier": "CUSTOM",
            "masterType": "standard",
            "workerType": "cloud_tpu",
            "workerCount": "1",
            "args": [
                "--training_data_path=gs://<bucket>/*.jpg",
                "--validation_data_path=gs://<bucket>/*.jpg",
                "--num_classes=2",
                "--max_steps=2",
                "--train_batch_size=64",
                "--num_eval_images=10",
                "--model_type=efficientnet-b0",
                "--label_smoothing=0.1",
                "--weight_decay=0.0001",
                "--warmup_learning_rate=0.0001",
                "--initial_learning_rate=0.0001",
                "--learning_rate_decay_type=cosine",
                "--optimizer_type=momentum",
                "--optimizer_arguments=momentum=0.9"
            ],
            "region": "us-central1",
            "jobDir": "gs://<bucket>",
            "masterConfig": {
                "imageUri": "gcr.io/cloud-ml-algos/image_classification:latest"
            }
        },
        "trainingOutput": {
            "consumedMLUnits": 1.59,
            "isBuiltInAlgorithmJob": true,
            "builtInAlgorithmOutput": {
                "framework": "TENSORFLOW",
                "runtimeVersion": "1.15",
                "pythonVersion": "3.7"
            }
        }
    }
}, function(error, response, body){
    console.log(body);
});
Result:
...
{
    createTime: '2022-02-09T17:36:42Z',
    state: 'QUEUED',
    trainingOutput: {
        isBuiltInAlgorithmJob: true,
        builtInAlgorithmOutput: {
            framework: 'TENSORFLOW',
            runtimeVersion: '1.15',
            pythonVersion: '3.7'
        }
    },
    etag: '999999aaaac=',
    ...
}
Thank you everyone for the input. This was useful to help me resolve my issue, but I wanted to also share the approach I ended up taking:
I started by making sure I could kick off my job manually.
I used this tutorial with a config.yaml file that looked like this:
workerPoolSpecs:
  machineSpec:
    machineType: n1-standard-4
    acceleratorType: NVIDIA_TESLA_T4
    acceleratorCount: 1
  replicaCount: 1
  containerSpec:
    imageUri: <Replace this with your container image URI>
    args: ["--some=argument"]
When I had a job that could be kicked off manually, I switched to using the Vertex AI Node.js API to start or cancel the job. The API exists in other languages too.
I know my original question was about HTTP requests, but having an API client in my language was a lot easier for me, in particular because I didn't have to worry about authentication.
I hope that is useful; happy to provide more details if needed.
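As a sketch of what that looks like on the Node.js side (not the poster's exact code; the display name, image URI, and project path are placeholders), the config.yaml above maps onto a plain CustomJob object that the Vertex AI client accepts:

```javascript
// Build the same worker pool spec as the config.yaml above as a plain object,
// in the shape the Vertex AI Node.js client (@google-cloud/aiplatform,
// JobServiceClient.createCustomJob) expects. All names below are placeholders.
const customJob = {
  displayName: "my-training-job",
  jobSpec: {
    workerPoolSpecs: [
      {
        machineSpec: {
          machineType: "n1-standard-4",
          acceleratorType: "NVIDIA_TESLA_T4",
          acceleratorCount: 1,
        },
        replicaCount: 1,
        containerSpec: {
          imageUri: "gcr.io/my-project/trainer:latest", // placeholder
          args: ["--some=argument"],
        },
      },
    ],
  },
};

// The client call itself (requires GCP credentials) would look roughly like:
// const { JobServiceClient } = require("@google-cloud/aiplatform");
// const client = new JobServiceClient({ apiEndpoint: "us-central1-aiplatform.googleapis.com" });
// await client.createCustomJob({ parent: "projects/<project>/locations/us-central1", customJob });
console.log(JSON.stringify(customJob.jobSpec.workerPoolSpecs[0].machineSpec));
```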

Using Glyphs with Amazon Location Service and Mapbox-GL

I am using the Amazon Location Service with React, react-map-gl and Mapbox-GL. I can successfully load ESRI and HERE maps, which suggests my authentication is OK, but I seem to have trouble accessing glyphs (fonts). I am trying to add a cluster markers feature like this. I can add the points and load the base layer, but when I try to add the point counts there is an error accessing the glyphs. It is sending a request like this:
https://maps.geo.eu-west-1.amazonaws.com/maps/v0/maps/<MY_MAP>/glyphs/Noto%20Sans,Arial%20Unicode/0-255.pbf?<....SOME_AUTHENTICATION_STUFF>
This seems to match the request format shown here: https://docs.aws.amazon.com/location-maps/latest/APIReference/location-maps-api.pdf
But it responds with: {"message":"Esri glyph resource not found"}
I get a similar error message with HERE maps and different fonts. I have added the following actions to the role, with no success (it loads the map but not the glyphs).
Tried this:
"geo:GetMap*"
And this:
"geo:GetMapStyleDescriptor",
"geo:GetMapGlyphs",
"geo:GetMapSprites",
"geo:GetMapTile"
What do I have to do to set up glyphs correctly in the Amazon Location Service? I have not configured anything; I just hoped they would work naturally. Have I missed a step? I can't see anything online about it.
Is there a work around where I could load the system font instead of a remote glyph?
I am using the following versions which are not the most recent as the most recent are incompatible with Amazon Location Service:
"mapbox-gl": "^1.13.0",
"react-map-gl": "^5.2.11",
The default font stack (Noto Sans, Arial Unicode) for the cluster layer isn't currently available via Amazon Location. You will need to change the font stack used by the cluster layer to something in the supported list: https://docs.aws.amazon.com/location-maps/latest/APIReference/API_GetMapGlyphs.html#API_GetMapGlyphs_RequestSyntax
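For example, a sketch of a cluster-count layer with the font stack overridden (the layer and source ids are placeholders, and you should verify against the GetMapGlyphs docs which font stacks are valid for your particular map style; "Noto Sans Regular" is used here as a typical Esri-style choice):

```javascript
// Cluster-count symbol layer that overrides the default
// ["Noto Sans", "Arial Unicode"] stack with a single font that
// Amazon Location can serve. Ids and the font name are assumptions.
const clusterCountLayer = {
  id: "cluster-count",
  type: "symbol",
  source: "points", // your GeoJSON source created with cluster: true
  filter: ["has", "point_count"],
  layout: {
    "text-field": "{point_count_abbreviated}",
    "text-font": ["Noto Sans Regular"], // instead of ["Noto Sans", "Arial Unicode"]
    "text-size": 12,
  },
};

// map.addLayer(clusterCountLayer);
console.log(clusterCountLayer.layout["text-font"][0]);
```

With the stack changed, the glyph request URL no longer contains the unsupported "Noto Sans,Arial Unicode" fontstack segment.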

Internationalization on serverless backend (AWS)

I'm building a serverless node.js web app on AWS (using Serverless Framework) and trying to implement internationalization on the backend (API Gateway/Lambda/DynamoDB).
For the front-end (React), I use redux to store the selected language and react-intl to switch between multiple languages. For the backend, what's the best way to implement internationalization?
Here are two ways I can think of, but there must be better ones.
A. Translate on the backend (Get language from path parameter)
path: {language}/validate
validate.js
export function main(event, context, callback) {
    const language = event.pathParameters.language;
    const data = JSON.parse(event.body);
    callback(null, validate(language, data));
}
This way, I need to pass the language as a function parameter everywhere, which is not desirable.
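For illustration, a minimal sketch of what a backend-translating validate() in approach A might look like (the message catalogs, field names, and return shape are hypothetical):

```javascript
// Hypothetical backend message catalogs for approach A: the Lambda owns the
// translations and returns already-localized text to the client.
const MESSAGES = {
  en: { required: "This field is required." },
  ja: { required: "この項目は必須です。" },
};

function validate(language, data) {
  const t = MESSAGES[language] || MESSAGES.en; // fall back to English
  const errors = [];
  if (!data || !data.name) errors.push(t.required);
  return { valid: errors.length === 0, errors };
}

console.log(validate("ja", {}).errors[0]); // この項目は必須です。
```

Every function that produces user-facing text needs the language threaded through like this, which is exactly the drawback noted above.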
B. Translate on front-end (i18n, react-intl)
backend hello.js response
{
    id: "samplePage.message.hello",
    defaultMessage: `Hello, ${name}`,
    values: { name }
}
frontend hello.js
<FormattedMessage {...response} />
ja.json (translation file for i18n)
{
    "samplePage.message.hello": "こんにちは、{name}。"
}
This way, it looks like everything works fine without any trouble, but am I missing anything?
We do the same as you suggest in B)... basically we have our backend on AWS Lambda and access data from DynamoDB.
All our translation happens in the frontend. The only difference is that we use i18next (more specifically react-i18next, though it makes little difference whether you use it or react-intl; i18next just offers a few more backends, caching, language detection, ... https://www.i18next.com/).
If you would like to learn more or see it in action, check out https://locize.com (or try it directly at https://www.locize.io/, 14-day free trial). While the app is currently only available in English, all the texts come in via XHR loading and get applied at runtime (i18n).
If you are interested in how we use serverless at locize.com, see the following slides from a talk we gave last year: https://blog.locize.com/2017-06-22-how-locize-leverages-serverless/
Last but not least: if you would like to get the most out of your ICU messages (validation, syntax highlighting, proper plural conversion, and machine translation that does not destroy the ICU DSL during MT), just give our service a try; it comes with a 14-day free trial.

Google Text to Speech Cloud service javascript

I am looking for a guide on how to use google text to speech service in Java script. Currently I am using this:
var src = "https://translate.google.com/translate_tts?key=" + key + "&total=1&idx=0&textlen=32&client=tw-ob&q=" + encodeURIComponent(txt) + "&tl=" + language;
console.log(src)
var vid = $('#Audio');
vid.get(0).pause();
$('#Audio').attr('src', src);
vid.get(0).load();
vid.get(0).play();
The main issue is that this code is not stable. Sometimes it returns empty audio and sometimes it works for the same request.
It seems this service was never added to the Google Cloud Platform; it used to be under Google Translate, but is not anymore. Using the link in the question will work if there is user interaction, like pressing a button. However, calling it dynamically in code without user interaction will result in an empty audio file. It looks like a way for Google to prevent denial-of-service attacks. I ended up using speechSynthesis for the languages it supports, and third-party products for other languages such as Arabic.
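The speechSynthesis fallback mentioned above is the browser's built-in Web Speech API; a minimal sketch (the helper name is made up, and language support varies by browser):

```javascript
// Minimal sketch of the speechSynthesis fallback (Web Speech API, browser-only).
// Returns the utterance, or null where the API is unavailable (e.g. Node.js).
function speak(text, lang) {
  if (typeof SpeechSynthesisUtterance === "undefined") {
    return null; // no Web Speech API in this environment
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang; // e.g. "en-US"; support for e.g. "ar" varies by browser
  window.speechSynthesis.speak(utterance);
  return utterance;
}

// speak("Hello world", "en-US");
```

Unlike the translate_tts endpoint, this runs entirely client-side, so there is no key and no empty-audio throttling, but the available voices depend on the user's browser and OS.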

WSO2 integration approach

We are planning to use WSO2 in a scenario where we receive a file from a PeopleSoft application with customer/user details (file size: approx. 2 GB) that needs to be inserted into SAP SuccessFactors. We want to use a WSO2 product for the integration between PeopleSoft and SAP. We were looking into some of the WSO2 products to achieve this, like DSS for streaming the file or batch processing, and ESB for our scenario.
Can DSS/ESB help in our scenario to stream the data from the file and call the SAP web service to create users?
Is there an approach in a WSO2 product to read the source (here a txt/csv file) row by row, do a transformation, and then call a web service to create the data in the target system? Please advise.
You can read a file in streaming mode with WSO2 ESB (with a "VFS" proxy) and use the Smooks mediator with a Smooks config file describing your CSV structure.
You will find different samples over the net, one of them: http://vvratha.blogspot.fr/2014/05/processing-large-text-file-using-smooks.html
In most of the samples, where Smooks splits the content of your huge file into small parts and routes the fragments to JMS or to other files, you will find a "highWaterMark" element in the config with an attribute named "mark" set to a huge value: you absolutely need to replace this value with -1 to avoid poor performance.
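As a rough sketch of where that setting lives (the field names, queue name, and namespace versions here are illustrative and should be checked against the sample linked above; only the highWaterMark/mark part is taken from the advice itself), a CSV-splitting Smooks config with JMS routing looks something like:

<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:csv="http://www.milyn.org/xsd/smooks/csv-1.2.xsd"
                      xmlns:jms="http://www.milyn.org/xsd/smooks/jms-routing-1.2.xsd">

    <!-- Read the big file as CSV, producing one record fragment per row -->
    <csv:reader fields="firstname,lastname,email" separator="," />

    <!-- Route each record fragment to JMS as it is read, in streaming mode -->
    <jms:router routeOnElement="record" destination="UserQueue">
        <!-- The critical setting: mark="-1" disables the high-water-mark
             throttling that otherwise kills performance on huge files -->
        <jms:highWaterMark mark="-1" />
    </jms:router>

</smooks-resource-list>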