Just in case someone here might know about this:
I am implementing this in C++ in a Windows environment.
This is the URL I built from an earlier file-listing response; I cut the tokens out to keep it shorter.
When I send a request like this, I only get a Bad Request error:
https://www.googleapis.com/drive/v2/files?maxResults=20&pageToken=valid_page_token&access_token=valid_access_token
But when I add an underscore to change pageToken to page_Token, I get the file-listing response from Google Drive:
https://www.googleapis.com/drive/v2/files?maxResults=20&page_Token=valid_page_token&access_token=valid_access_token
So I wonder which way this should be, or whether I always need to manipulate the string like this to get the next-page request to work.
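For reference, here is a minimal sketch of how the URL can be built in C++ with the token percent-encoded, in case the raw pageToken contains characters that break the query string. The helper name and the placeholder token values are illustrative, not from any Google client library:

    #include <cctype>
    #include <iomanip>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Percent-encode a query-string value so characters such as '=', '+' or '/'
    // inside a token cannot break the URL. Helper name is illustrative.
    std::string UrlEncode(const std::string& value)
    {
        std::ostringstream out;
        out << std::hex << std::uppercase;
        for (unsigned char c : value)
        {
            if (std::isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
                out << c;
            else
                out << '%' << std::setw(2) << std::setfill('0') << static_cast<int>(c);
        }
        return out.str();
    }

    int main()
    {
        // Placeholder values; the real ones come from the previous files.list response.
        std::string pageToken   = "valid_page_token";
        std::string accessToken = "valid_access_token";

        std::string url = "https://www.googleapis.com/drive/v2/files?maxResults=20"
                          "&pageToken=" + UrlEncode(pageToken) +
                          "&access_token=" + UrlEncode(accessToken);
        std::cout << url << std::endl;
        return 0;
    }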
I'm trying to get a returns report from Amazon, but my request is always cancelled. I have a working report request using:
'ReportType' => 'GET_MERCHANT_LISTINGS_DATA',
'ReportOptions' => 'ShowSalesChannel=true'
I modified it by changing ReportType and removing ReportOptions. MWS accepts the request, but it's always cancelled. I've also tried to find a working example of it on Google, but without success. Maybe someone has a working example? I can download the report when I request it from the Amazon web page, so I suppose it requires ReportOptions, but I don't know what to put there (the only info I get is ReportProcessingStatus CANCELLED). Normally I choose Day, Week, or Month. I checked the Amazon docs, but there isn't much information: https://docs.developer.amazonservices.com/en_US/reports/Reports_RequestReport.html
Any ideas?
Overview
I wish to collect the contents of the address bar of a browser that is opened by a function in a C/C++ program. There are a few threads here that discuss the matter, but none of them seems helpful to me.
My environment
OS : Windows 7, Windows 10.
Development language : C / C++
My project
I am writing an app in which I need to collect data from a server. The server requires the client to authenticate itself and uses the 2-step OAuth 2.0 protocol for that. I need to make use of a web API developed by a third party.
The following page describes the whole process.
https://apidocs.getresponse.com/v3/case-study/oauth2-authorization-code
However, I only have a problem with the first step: obtaining an authorization code from the server.
A highlight from this page explains the process for the first step, the only one that matters here:
Want to see for yourself? Try this.
I have created an account and registered a bogus app on getresponse.com for testing purpose.
Navigate to the following URL:
https://app.getresponse.com/oauth2_authorize.html?response_type=code&client_id=41979979-c18b-11ea-bb1c-00163ec8ce26&state=xyz
Log in with:
Your email: jnj54972#cuoly.com
Password: #Aa11111
On the next screen, click Yes.
After redirection to the example.com site, the next screen shows the following in the address bar :
http://example.com/receiver?code=<code>&state=xyz
This code in the address bar is precisely what I need to continue with the second step of the authentication when this page is displayed in the browser. Hence the necessity to collect the data contained in the address bar.
You can repeat the operation and navigate again to the same URL: you will not have to login again, and you will obtain another authorization code.
(Note: To test the OAuth 2.0 protocol on getresponse.com, I created an app on 9 July 2020. This account has a validity of 30 days. Therefore, the login credentials mentioned above are likely to expire a month after the date of creation.)
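Once the full redirect URL has been captured, extracting the code parameter from it is the easy part. A minimal C++ sketch, where the helper name and sample URL are illustrative:

    #include <iostream>
    #include <string>

    // Pull the value of the "code" query parameter out of a captured redirect URL.
    // Helper name is illustrative.
    std::string ExtractAuthCode(const std::string& url)
    {
        const std::string key = "code=";
        std::size_t start = url.find(key);
        if (start == std::string::npos)
            return "";
        start += key.size();
        std::size_t end = url.find('&', start);
        if (end == std::string::npos)
            return url.substr(start);
        return url.substr(start, end - start);
    }

    int main()
    {
        std::string captured = "http://example.com/receiver?code=abc123&state=xyz";
        std::cout << ExtractAuthCode(captured) << std::endl;   // prints abc123
        return 0;
    }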
What I have tried so far
I won't go into detail or this post may get too long, but I did try numerous curl GET requests with various parameters. No luck: I never get the browser's address bar data with the code in return.
Can someone help?
Here is a list of ways you could accomplish your task:
Hook a function that changes the address bar text in the browser. This can be achieved with remote DLL/code injection, having the injected code send the results back to your main process via shared memory or another interprocess communication method.
Find the memory address of the buffer holding the address bar text (with a memory scanner such as Cheat Engine), then actively scan that address for changes to your desired text, which in this case is code=
Create a browser extension that listens for URL change events in tabs and have it send the results back to your process, preferably using sockets (a sketch of the native listening side follows below).
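For the third option, here is a minimal sketch of the native side: a local TCP listener the extension could connect to and send the current tab URL. The port number and the plain-text message format are assumptions, not part of any particular extension API:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iostream>
    #include <string>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        // Listen on the loopback interface only; port 8765 is an arbitrary choice.
        SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (listener == INVALID_SOCKET) { WSACleanup(); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8765);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == SOCKET_ERROR ||
            listen(listener, 1) == SOCKET_ERROR)
        {
            closesocket(listener);
            WSACleanup();
            return 1;
        }

        std::cout << "Waiting for the extension to report a URL..." << std::endl;
        SOCKET client = accept(listener, nullptr, nullptr);
        if (client != INVALID_SOCKET)
        {
            char buf[4096];
            int received = recv(client, buf, sizeof(buf) - 1, 0);
            if (received > 0)
            {
                buf[received] = '\0';
                std::string url(buf);
                std::cout << "Received URL: " << url << std::endl;
                // Search for "code=" in the URL here and hand it to the token-exchange step.
            }
            closesocket(client);
        }
        closesocket(listener);
        WSACleanup();
        return 0;
    }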
I've set up an AWS API which obtains a pre-signed URL for uploading to an AWS S3 bucket.
The pre-signed URL has a format like:
https://s3.amazonaws.com/mahbukkit/background4.png?AWSAccessKeyId=someaccesskeyQ&Expires=1513287500&x-amz-security-token=somereallylongtokenvalue
where background4.png would be the file I'm uploading.
I can successfully use this URL through Postman by:
configuring it as a PUT call,
setting the body to Binary so I can select the file,
setting the header to Content-Type: image/png
However, I'm trying to make this call using BrightScript running on a BrightSign player. I'm pretty sure I'm supposed to be using the roUrlTransfer object and the PutFromFile function described in this documentation:
http://docs.brightsign.biz/display/DOC/roUrlTransfer
Unfortunately, I can't find any good working examples showing how to do this.
Could anyone who has experience with BrightScript help me out? I'd really appreciate it.
You are on the right track.
I would do:
sub main()
    tr = CreateObject("roUrlTransfer")
    ' Point the transfer at the pre-signed URL, query string and all
    tr.SetUrl(<presignedUrl>)
    headers = {}
    headers.AddReplace("Content-Type", "image/png")
    tr.AddHeaders(headers)
    ' Describe the request: a PUT with the file contents as the body
    info = {}
    info.method = "PUT"
    info.request_body_file = <fileName>
    if tr.AsyncMethod(info)
        print "File put started"
    else
        print "File put did not start"
    end if
    ' Crude wait for the async transfer to finish; better to listen on a message port (see note below)
    delay(100000)
end sub
Note I used two different methods to populate the two associative arrays: you need to use the AddReplace method (rather than the dot shortcut) when the key contains special characters like '-'.
This script should work, though I don't have a unit on hand to do a syntax check.
Also, you should set up a message port and listen for the event that is generated, to confirm whether the PUT was successful and/or what the response code was.
Note that when you read responses from URL events, if the response code from the server is anything other than 200, the BrightSign will discard the response body and you cannot read it. This is unhelpful, because services like Dropbox like to return a 400 response with more information about what was wrong (bad API key, etc.) in the body, so in that case you are left in the dark, doing trial and error to figure out what went wrong.
Good luck, and sorry I didn't see this question sooner.
I'm working on a service which scrapes specific links from blogs. The service makes calls to different sites, pulls in the data, and stores it.
I'm having trouble specifying the URL for updating the data on the server; at the moment I use the verb update to pull in the latest links.
I currently use the following endpoints:
GET /user/{ID}/links - gets all previously scraped links (a few milliseconds)
GET /user/{ID}/links/update - starts scraping and returns the scraped data (a few seconds)
What would be a good option for the second URL? Some examples I came up with myself:
GET /user/{ID}/links?collection=(all|cached|latest)
GET /user/{ID}/links?update=1
GET /user/{ID}/links/latest
GET /user/{ID}/links/new
Using GET to start a process isn't very RESTful. You aren't really GETting information, you're asking the server to process information. You probably want to POST against /user/{ID}/links (a quick Google for PUT vs POST will give you endless reading if you're curious about the finer points there). You'd then have two options:
POST with background process: If using a background process (or queue) you can return a 202 Accepted, indicating that the service has accepted the request and is about to do something. 202 generally indicates that the client shouldn't wait around, which makes sense when performing time dependent actions like scraping. The client can then issue GET requests on the first link to retrieve updates.
Creative use of Last-Modified headers can tell the client when new updates are available. If you want to be super fancy, you can implement HEAD /user/{ID}/links so that it returns the Last-Modified header without a response body (saving both bandwidth and processing); an example exchange is sketched after these two options.
POST with direct processing: If you're doing the processing during the request (not a great plan in the grand scheme of things), you can return a 200 OK with a response body containing the updated links.
Subsequent GETs would perform as normal.
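An illustrative exchange for the background-process option, with made-up paths, dates, and payload:

    POST /user/42/links HTTP/1.1
    Host: api.example.com

    HTTP/1.1 202 Accepted
    Location: /user/42/links

    ... scrape still running ...

    GET /user/42/links HTTP/1.1
    Host: api.example.com
    If-Modified-Since: Wed, 05 Jul 2017 12:00:00 GMT

    HTTP/1.1 304 Not Modified

    ... scrape finished ...

    GET /user/42/links HTTP/1.1
    Host: api.example.com
    If-Modified-Since: Wed, 05 Jul 2017 12:00:00 GMT

    HTTP/1.1 200 OK
    Last-Modified: Wed, 05 Jul 2017 12:30:00 GMT
    Content-Type: application/json

    [ ...all scraped links, including the new ones... ]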
These days I have been investigating the mod_security integration with Google Safe Browsing, but I can't generate the local GSB database with the HTTP call. First, I generated an API key tied to my Google user, and when I call the URL to retrieve the lists, everything works:
http://safebrowsing.clients.google.com/safebrowsing/list?client=api&apikey=<myapikey>&appver=1.0&pver=2.2
After that I generated the new MAC key over an SSL call, as shown below:
https://sb-ssl.google.com/safebrowsing/newkey?client=api&apikey=<myapikey>&appver=1.0&pver=2
I received a 200 response with a clientkey (24 characters long) and a wrappedkey (100 characters long). After that I tried to download the lists with the call below, but I received a 400 response:
http://safebrowsing.clients.google.com/safebrowsing/downloads?client=api&apikey=<myapikey>&appver=1.0&pver=2.2&wrkey=<mywrappedkey>
Has anyone seen the same behavior?