Fetching live updates from Cricinfo - regex

I am working on fetching live updates from http://www.espncricinfo.com/wcldiv4-2012/engine/match/576414.html. So far I have been able to fetch the live scorecards with wget and a regex, parsing the "title" of the page (which contains the scorecard).
But I am not able to fetch the commentary: I can see it when I inspect the page with Firebug, but when I fetch the page with wget the commentary doesn't show up.
Is there any way to use Firebug from the command line? (I was wondering that, if I could, I could then fetch those results.)
Or what is the way to fetch those auto-updating commentaries?

I guess you should be able to use the RSS feeds listed on the page below; you need to parse the relevant feed and get going (a minimal sketch follows).
http://www.espncricinfo.com/ci/content/rss/feeds_rss_cricket.html
I doubt that Cricinfo has its own API that you could leverage.
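For illustration, assuming the feed you pick from that page is standard RSS 2.0, a minimal Python sketch would be (the feed URL is only an example; substitute the match or series feed you actually need):
import urllib.request
import xml.etree.ElementTree as ET

# Example feed URL only; pick the feed you need from the feeds page above.
FEED_URL = "http://static.cricinfo.com/rss/livescores.xml"

with urllib.request.urlopen(FEED_URL) as resp:
    root = ET.fromstring(resp.read())

# Standard RSS 2.0 layout: rss/channel/item
for item in root.findall("./channel/item"):
    print(item.findtext("title", default=""))
    print(item.findtext("description", default=""))
    print("-" * 40)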
UPDATE 1: Cricinfo leverages Yahoo Pipes and merges these feeds; you can leverage the same approach and get the resulting pipe's output in JSON or XML format. A simple pipe, for example, is here:
http://pipes.yahoo.com/pipes/pipe.info?id=tMcVGcqn3BGvsT_2R2EvQ

Related

How to download attachment file in Postman

I am using Postman for API calls, as I am trying to batch-download thousands of files from a database.
https://website.com/services/rest/connect/v1.4/incidents/439006/fileAttachments/?download
This creates the call, but I then have to click Save Response -> Save to a file to download the attachment.
Is this something that's possible in Postman?
My IT department is very strict about installing development environments, so I only have Postman and RStudio. I know I could potentially use RCurl, but since I don't know how to use curl I don't know where to start.
What I want to do is download any attachment (if it exists) for a number of keys.
And, in a loop over the key file, call:
i = 0
Start loop over rows(file)
    i = i + 1
    key = key(i)
    GET https://website.com/services/rest/connect/v1.4/incidents/{key}/fileAttachments/?download
    Save the response to a file named after the key
End loop
I can't get it to work. I want a folder full of thousands of downloads, each named after its key.
You can use the command below on a Linux system.
curl -v "https://website.com/services/rest/connect/v1.4/incidents/{key}/fileAttachments/?download"
You can also try sending the request from Postman and saving that response.
Let me know if that works.
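To script the loop from the question, a minimal Python sketch could look like this (assumptions: the keys sit one per line in keys.txt and the endpoint accepts basic auth; adjust the auth and file handling to your environment):
import requests

URL_TEMPLATE = "https://website.com/services/rest/connect/v1.4/incidents/{key}/fileAttachments/?download"
AUTH = ("username", "password")  # placeholder credentials

# Assumption: keys.txt holds one incident key per line.
with open("keys.txt") as f:
    keys = [line.strip() for line in f if line.strip()]

for key in keys:
    resp = requests.get(URL_TEMPLATE.format(key=key), auth=AUTH)
    if resp.ok:
        # Save the raw attachment bytes to a file named after the key.
        with open(str(key), "wb") as out:
            out.write(resp.content)
    else:
        print(f"Key {key}: HTTP {resp.status_code}")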

extract all aws transcribe results using boto3

I have a couple hundred transcription results in AWS Transcribe and I would like to get all of the transcribed text and store it in one file.
Is there any way to do this without clicking on each transcription result and copying and pasting the text?
You can do this via the AWS APIs.
For example, if you were using Python, you could use the boto3 SDK:
list_transcription_jobs() will return a list of transcription job names.
For each job, you could then call get_transcription_job(), which will provide the TranscriptFileUri that is the location where the transcription is stored.
You can then use get_object() to download the file from Amazon S3
Your program would then need to combine the content from each file into one file.
See how you go with that. If you run into any specific difficulties, post a new Question with the code and an explanation of the problem.
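A minimal sketch of those steps (assumption: the jobs used the default AWS-managed output location, so the TranscriptFileUri is a pre-signed HTTPS link that can be fetched directly; if you wrote output to your own S3 bucket, download it with get_object() instead):
import boto3
import requests

transcribe = boto3.client("transcribe")

with open("all_transcripts.txt", "w") as out:
    paginator = transcribe.get_paginator("list_transcription_jobs")
    for page in paginator.paginate(Status="COMPLETED"):
        for summary in page["TranscriptionJobSummaries"]:
            name = summary["TranscriptionJobName"]
            job = transcribe.get_transcription_job(TranscriptionJobName=name)
            uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
            # Fetch the transcript JSON and append its text to the combined file.
            transcript = requests.get(uri).json()
            text = transcript["results"]["transcripts"][0]["transcript"]
            out.write(f"=== {name} ===\n{text}\n\n")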
I put an example on GitHub that shows how to:
run an AWS Transcribe job,
use the Requests package to get the output,
write output to the console.
You ought to be able to refit it pretty easily for your purposes. Here's some of the code, but it'll make more sense if you check out the full example:
# Start a transcription job for an MP3 already uploaded to S3.
job_name_simple = f'Jabber-{time.time_ns()}'
print(f"Starting transcription job {job_name_simple}.")
start_job(
    job_name_simple, f's3://{bucket_name}/{media_object_key}', 'mp3', 'en-US',
    transcribe_client)

# Wait for the job to finish (start_job, get_job and TranscribeCompleteWaiter
# are helpers defined in the full example).
transcribe_waiter = TranscribeCompleteWaiter(transcribe_client)
transcribe_waiter.wait(job_name_simple)

# Fetch the job details and download the transcript JSON from the reported URI.
job_simple = get_job(job_name_simple, transcribe_client)
transcript_simple = requests.get(
    job_simple['Transcript']['TranscriptFileUri']).json()
print(f"Transcript for job {transcript_simple['jobName']}:")
print(transcript_simple['results']['transcripts'][0]['transcript'])

Not able to delete multiple campaigns using Postman from Eloqua

I have been trying to delete multiple campaigns from Eloqua at a time using Postman, but I am not able to. I don't see a reference in the documentation either: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAB/index.html#Developers/RESTAPI/REST-API.htm%3FTocPath%3D%2520Application%2520API%7C_____0.
Please let me know if deleting the multiple campaigns is possible.
It is not possible.
The link you provided mentions that it is outdated, and a redirect link is available: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAC/rest-endpoints.html
Have a look at all the DELETE methods over there, and you will see that there is no provision for sending more than one id at a time.
Edit: You say you are using Postman. It is possible to perform repetitive tasks (like deleting multiple campaigns) with different parameters each time by using Collections.
Edit 2:
Create an environment,
type your URL with the id as a variable, e.g.: xyz.com/delete/{id},
and send all the id values as a JSON or CSV data file. The Postman docs give a sample JSON; you would simply have to provide your ids inside an array, e.g.:
[
{"id":1},
{"id":2},
{"id":3}
]
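If you would rather script it than use Postman's Collection Runner, here is a minimal Python sketch of the same one-DELETE-per-id loop (the xyz.com/delete/{id} URL is the same placeholder as above; substitute your real Eloqua endpoint and authentication):
import requests

ids = [1, 2, 3]  # your campaign ids
AUTH = ("company\\username", "password")  # placeholder credentials

for campaign_id in ids:
    # The API offers no bulk delete, so each id gets its own DELETE request.
    resp = requests.delete(f"https://xyz.com/delete/{campaign_id}", auth=AUTH)
    print(campaign_id, resp.status_code)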

GoCD compare output in json in bash

My situation is that I need to be able to get the GoCD compare result as JSON in bash, so that I can extract the commit ids from the output.
https://<go server url>/go/compare/<pipeline name>/<old build>/with/<new build>
A GET on the above URL (with authentication) returns an HTML document.
However, I need to get this in JSON format so that I can extract the list of commit ids from the comparison page output. Please suggest if there are any ideas.
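For what it's worth, newer GoCD releases expose a pipeline comparison endpoint in the REST API; if your server supports it, something along these lines may work (the path and Accept header are assumptions taken from GoCD's API docs, so verify them against your server's API version):
import json
import requests

GO_SERVER = "https://<go server url>/go"  # placeholders as in the question
PIPELINE = "<pipeline name>"
OLD, NEW = "<old build>", "<new build>"

resp = requests.get(
    f"{GO_SERVER}/api/pipelines/{PIPELINE}/compare/{OLD}/{NEW}",
    headers={"Accept": "application/vnd.go.cd+json"},
    auth=("username", "password"),  # placeholder credentials
)
resp.raise_for_status()

# Dump the JSON; the commit ids can then be picked out of the material
# revision entries (e.g. with jq from bash).
print(json.dumps(resp.json(), indent=2))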

REST URI to get data from Commission Junction (CJ) fails

I am making REST calls to download reports from Commission Junction (CJ) but am not able to fetch the data. I am, however, able to manually download a report with the desired columns:
Account name, Campaign name, Ad group, Destination URL, Ad distribution, Impressions, Clicks, CTR, Average CPC, Spend, Avg. position.
The REST uri I am using is
https://commission-detail.api.cj.com/v3/commissions?date-type=posting&start-date=2013-03-14&end-date=2013-04-14&action-types=impression
Is this the right REST URI to get the data for the desired columns mentioned above? Please suggest.
I believe that using v3 you can only get yesterday's data.
http://www.ericnagel.com/how-to-tips/commission-junction-web-services.html
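For reference, a minimal sketch of making that call from Python (assumption: the CJ web services expect your developer key in an "authorization" header and return XML; check CJ's documentation before relying on this):
import requests

URL = ("https://commission-detail.api.cj.com/v3/commissions"
       "?date-type=posting&start-date=2013-03-14&end-date=2013-04-14"
       "&action-types=impression")

# Assumption: the developer key goes in an "authorization" header.
resp = requests.get(URL, headers={"authorization": "YOUR_DEVELOPER_KEY"})
print(resp.status_code)
print(resp.text)  # XML body; inspect it to see which fields come back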