AWS Comprehend job not completed

I'm trying to run a Comprehend job, but the command ends immediately and the status remains SUBMITTED.
Does anyone know what the problem is and how I can get the output?
I checked the job ID's status and got this:

You can check the status of the job with describe-pii-entities-detection-job, using the job-id from the start-job response.
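A job in SUBMITTED normally moves to IN_PROGRESS and then COMPLETED on its own, so polling that call is the usual approach. A minimal polling sketch using boto3's equivalent of the CLI command above; wait_for_job is a hypothetical helper, and the response shape follows boto3's describe_pii_entities_detection_job:

```python
import time

def wait_for_job(comprehend, job_id, poll_seconds=30):
    """Poll a PII entities detection job until it reaches a terminal status."""
    while True:
        resp = comprehend.describe_pii_entities_detection_job(JobId=job_id)
        status = resp["PiiEntitiesDetectionJobProperties"]["JobStatus"]
        if status in ("COMPLETED", "FAILED", "STOPPED"):
            return resp
        time.sleep(poll_seconds)

# Usage (requires AWS credentials and a region):
# import boto3
# client = boto3.client("comprehend")
# props = wait_for_job(client, "your-job-id")["PiiEntitiesDetectionJobProperties"]
# print(props["JobStatus"], props.get("OutputDataConfig", {}).get("S3Uri"))
```

On COMPLETED, the output location is in the job's OutputDataConfig (an S3 URI).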

Related

Report request stuck in "INQUEUE" status

I'm trying to request the "GET_MERCHANTS_LISTINGS_FYP_REPORT" report from the Amazon SP API. The request itself went fine, but when checking the status I receive "INQUEUE", and it's been like that since yesterday. Any idea why this report gets stuck? The start date is one day before today, so I don't request much data.
I've also tried requesting it via the scratchpad, where it's been stuck in "IN_PROGRESS" for 5 days.
Any idea what is happening?

Get status of scheduler job from python

I have a scheduled job running on Cloud Scheduler, and I would like to get its status ("Success" or "Failed") from Python. There is a Python client for Cloud Scheduler, but I can't find documentation on how to get the status.
You can get the status with the client library like this:
from google.cloud.scheduler import CloudSchedulerClient
client = CloudSchedulerClient()
print(client.list_jobs(parent="projects/PROJECT_ID/locations/LOCATION"))
I used list_jobs here, but you can also use get_job to fetch a single job.
Each Job object you receive has a status field. If it is empty (meaning no error), the latest run succeeded; otherwise, the run failed and the field contains the gRPC error code.
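Building on that, a small sketch that classifies each job by its last-attempt status. It assumes the Job objects expose a status field with a gRPC code attribute, as in the google-cloud-scheduler client; last_run_results is a hypothetical helper:

```python
def last_run_results(jobs):
    """Map job name -> 'Success' or the gRPC error code of the last attempt."""
    results = {}
    for job in jobs:
        # An unset/zero code means the latest execution succeeded.
        results[job.name] = "Success" if job.status.code == 0 else job.status.code
    return results

# Usage with the real client (requires credentials):
# from google.cloud.scheduler import CloudSchedulerClient
# client = CloudSchedulerClient()
# jobs = client.list_jobs(parent="projects/PROJECT_ID/locations/LOCATION")
# print(last_run_results(jobs))
```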

HERE Batch Geocoder Accepted but never Finishes

I've been evaluating moving our mapping and routing apps to HERE's REST API. I've been testing some scenarios to prove it out, and one I can't seem to get working correctly is batch geocoding.
Submitting the data to geocode works fine, and I do get a valid RequestID back, but when I poll for the status of the batch job, the status always says "accepted" and never seems to change.
I am using a developer account with a 90-day trial. Could there be a limitation due to the type of account?
Looks like it's a queue issue, except mine has been going on for nearly a week.
HERE API never runs batch job, always returns accepted status
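While polling, the status has to be pulled out of the XML the batch geocoder returns. A small sketch, assuming a v6.2-style response where the status sits in a <Status> element; the sample payload below is illustrative, not taken from the question:

```python
import xml.etree.ElementTree as ET

def parse_batch_status(xml_text):
    """Extract the <Status> value from a batch geocoder status response."""
    root = ET.fromstring(xml_text)
    # Search anywhere in the tree in case the nesting differs.
    status = root.find(".//Status")
    if status is None:
        raise ValueError("no <Status> element in response")
    return status.text

sample = """<ns2:SearchBatch xmlns:ns2="http://www.navteq.com/lbsp/Search-Batch/1">
  <Response><MetaInfo><RequestId>abc123</RequestId></MetaInfo>
  <Status>accepted</Status></Response></ns2:SearchBatch>"""
print(parse_batch_status(sample))
```

A job stuck in "accepted" means it has not been picked up from the queue yet, which is consistent with the queue issue mentioned above.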

Google Dataflow "Workflow failed" with no reason

I am running Dataflow jobs on Google Cloud Platform, and a new error I get is "Workflow failed" without any explanation.
The logs I get are the following:
2017-08-25 (00:06:01) Executing operation ReadNewXXXFromStorage/Read+JsonStringsToXXX+RemoveLanguagesFromXXX...
2017-08-25 (00:06:01) Executing operation ReadOldXYZ_ABC_1234_123_ns_123123123123123/GroupByKey/Create
2017-08-25 (00:06:01) Starting 1 workers in europe-west1-b...
2017-08-25 (00:06:01) Executing operation ReadOldXYZ_ABC_1234_123_ns_123123123123123/ParDo(SplitQuery)+ReadOldXYZ...
2017-08-25 (00:06:48) Workflow failed.
2017-08-25 (00:06:48) Stopping worker pool...
2017-08-25 (00:06:58) Worker pool stopped.
How am I supposed to find out what's going wrong? It should not be a permissions problem on the object, as similar jobs run successfully.
When I try to rerun the template from the Google Cloud Console, I get the message:
No metadata file found for this template
But I am able to start the template, and now it runs successfully. Might this have to do with exceeded quotas? We just increased our CPU and IP quotas for Dataflow, and I increased our parallel running jobs from 5 to 15 to be able to use the quota. When I rerun the template without any other jobs running, everything seems to work fine.
Any input is highly appreciated. Thanks.
EDIT: It seems the jobs failed because of an exceeded CPU quota, but usually we would get an error description saying "could not spawn enough workers". Nevertheless, everything works fine after I reduced the maximum number of workers per job so that our quota cannot be exceeded.
I believe the "No metadata file found for this template" should be considered a warning, not an error. A template is able to have a "metadata" file associated with it which allows validation of parameters. If no such file is present, the parameters aren't validated, but everything else works as normal -- the message is just the indicator of this situation.
It sounds like the problem was the job being unable to run for other reasons. Based on your description and the edit, this was likely due to a lack of quota to run the job.
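The quota arithmetic is easy to check up front: with several jobs running in parallel, the worst case is every job scaling to its maximum workers at once. A sketch with illustrative numbers (not taken from the question):

```python
def fits_cpu_quota(parallel_jobs, max_workers_per_job, cores_per_worker, cpu_quota):
    """Return whether the worst-case Dataflow worker fleet fits the CPU quota."""
    needed = parallel_jobs * max_workers_per_job * cores_per_worker
    return needed <= cpu_quota

# e.g. 15 parallel jobs, up to 10 four-core workers each, 400-CPU quota:
print(fits_cpu_quota(15, 10, 4, 400))  # needs 600 CPUs
print(fits_cpu_quota(15, 6, 4, 400))   # needs 360 CPUs
```

This matches the fix described in the edit: lowering the per-job maximum workers until the worst case stays under the regional quota.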

Can Control-M execute a http service endpoint to GET job status?

I am very new to Control-M and wanted to ask if it supports this scenario:
We have an HTTP web service that runs a long-running job, e.g.
http://myserver/runjob?jobname=A
This starts job A on the server and returns a job ID. I return the job ID so I can get the status of the job from the server whenever I want to. The job has several statuses, e.g. Waiting, In progress, Error.
I want the Control-M job status to be updated as soon as the job on the server updates. For that, I have created a web service URL:
http://localhost/getjobsatus?jobid=1
This request returns the status of the job with ID 1.
Can Control-M poll a web service URL for a job status, and can I call a web service to run a job and get its ID back?
Apologies for asking such a basic question. Any help will be really appreciated.
Welcome to the Control-M community :-)
You can implement two Control-M WebServices jobs (available with BPI, the Business Process Integration Suite): one to submit your job and get its ID, and one to track its status.
Alternatively, you can implement this in one Control-M OS-type job using the ctmsubmit command inside a script…
Feel free to join our Control-M online community