I am using the Fixer.io API to get the current currency rate, and I want to get the rate in PL/SQL. Before this I tried it in PHP; now I want to make the same call in PL/SQL to get the current rate. Here is what I have tried in PL/SQL:
declare
req UTL_HTTP.REQ;
BEGIN
req := UTL_HTTP.BEGIN_REQUEST('http://data.fixer.io/api/latest?access_key=access_key');
UTL_HTTP.SET_HEADER(req, 'User-Agent', 'Mozilla/4.0');
UTL_HTTP.GET_RESPONSE(req);
dbms_output.put_line('hitting');
EXCEPTION
WHEN UTL_HTTP.END_OF_BODY THEN
dbms_output.put_line('exception');
END;
It's not working and gives the error below:
'GET_RESPONSE' is not a procedure or is undefined
Any suggestions on how to fix this?
The issue is that GET_RESPONSE is a function, not a procedure, therefore you need to assign its return value to a variable; see https://docs.oracle.com/database/121/ARPLS/u_http.htm#ARPLS70993 for a more detailed description.
You will need to declare a response variable
resp UTL_HTTP.RESP;
then your call to GET_RESPONSE would look like this:
resp := UTL_HTTP.GET_RESPONSE(req);
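Putting the pieces together, a minimal corrected block could look something like this (just a sketch: it dumps the raw JSON with DBMS_OUTPUT, reads the body with READ_LINE and closes the response, which your original block didn't do, and access_key is still the placeholder from your example):
declare
  req  UTL_HTTP.REQ;
  resp UTL_HTTP.RESP;
  buf  VARCHAR2(32767);
BEGIN
  req := UTL_HTTP.BEGIN_REQUEST('http://data.fixer.io/api/latest?access_key=access_key');
  UTL_HTTP.SET_HEADER(req, 'User-Agent', 'Mozilla/4.0');
  -- GET_RESPONSE is a function: assign its result to a RESP variable
  resp := UTL_HTTP.GET_RESPONSE(req);
  -- read the body line by line until END_OF_BODY is raised
  LOOP
    UTL_HTTP.READ_LINE(resp, buf, TRUE);
    dbms_output.put_line(buf);
  END LOOP;
EXCEPTION
  WHEN UTL_HTTP.END_OF_BODY THEN
    UTL_HTTP.END_RESPONSE(resp);
END;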
Ok I have been stuck on this for a few weeks now. I'm using the Front Email API for a business use case and have created an iterative function to (attempt to) get multiple pages of query results.
A quick overview of the API (endpoint) I'm using for context:
The "events" endpoint returns a list of records based on the parameters given in the query (like before/after/between certain times, types of events, etc.)
If the query results in more than 100 records, a "next" pagination link is generated for the next page(s) of results. There is no "page=n" parameter in the query URL, you only get the "next page" link from the response of the previous query (which is fairly unique)
A side note, the initial base_url for the first query, and the base_url of the "next page" link are two different urls (i.e. the initial call is https://api2.frontapp.com and the second is https://companynamehere-inc.api.frontapp.com), so this is taken into consideration in my querying function.
My environment looks like this:
As you can see, I query the initial URL using the external Func_Query_Front_API function, then begin the iteration: while the next-page link is not null, keep feeding the next links returned from the previous calls back into the function to get the next page of results. I deconstruct the links given to the function into a base, relative path and body/params so that I can use this logic in both Desktop and the Online Service (don't ask).
It's difficult to describe the results I get. Sometimes the preview window just spins and never returns any results, with the API not being queried at all (confirmed from Postman and the rate-limit-remaining number in the response headers). When I do get results back, it's a massive list of records (way more than what I'm querying for/expecting to receive) that contains lots of duplicates. It's almost as if the initial (or the second) query URL is being looped over and over again, but not the "next" page links: like it's stuck in a semi-infinite loop using only the "next" link from the initial response, so it just repeats the same query over and over, re-using the same "next" page link.
Unfortunately I cannot provide my bearer token for this API, as it's private company info, but I'm wondering if the issue is with my syntax in the code that someone can spot, and steer me in the right direction. The querying function itself works when invoked on its own, and everything looks like it SHOULD work, but it's just not doing what I think the code says it should do.
I understand it's not much to go on, but I'm out of options in trying to debug this thing. Thanks!
UPDATE
I think what MIGHT help here is working code written in Python that does what I'm trying to translate into Power BI, so I've provided it below (again, the bearer token is not provided, but the syntax should make things a bit clearer). The code closely resembles what I've already built in Power BI, so hopefully this helps things a bit.
import requests
from time import sleep
########################################
# GLOBAL VARIABLES
########################################
_event_types_filter = "assign&q[types]=archive&q[types]=comment&q[types]=inbound&q[types]=outbound&q[types]=forward&q[types]=tag&q[types]=out_reply"
_after = 1667506000
_page_limit = 100
_bearer_token = "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXX"
_init_base_url = "https://api2.frontapp.com"
_relative_path = "events"
_init_body = "limit=" + str(_page_limit) + "&q[after]=" + str(_after) + "&q[types]=" + _event_types_filter
_headers = {'authorization': _bearer_token}
_init_query_url = _init_base_url + "/" + _relative_path + "?" + _init_body
########################################
# FUNCTION - Query the API
########################################
def Func_Query_Front_API(input_url):
#print(input_url)
# Deconstruct the url into its separate parts
splitted_input_url = input_url.split("?")
input_url_base = splitted_input_url[0].replace("/events", "")
input_url_body_params = splitted_input_url[1]
# Query the API
response = requests.request("GET",
input_url_base + "/" + _relative_path + "?" + input_url_body_params,
headers=_headers)
# Get the "next" link from the response
next = response.json()["_pagination"]["next"]
# Return the response and the "next" link
return response, next
########################################
# MAIN PROGRAM START
########################################
# List to add the response data's to
Source = []
# Make the initial request and add the results to the list
init_response, next = Func_Query_Front_API(_init_query_url)
Source.append(init_response)
# While the "next" link(s) are not None, query the current
# "next" link for the next page(s) of the query
while next != None:
response, next = Func_Query_Front_API(next)
Source.append(response)
sleep(1)
print(Source)
print("Done!")
I have been using the SoapUI open source version for a short time and am not yet good at Groovy scripting. Please help me figure out the following issue:
I get a response from the previous test step, let's say Response1, and I need to parse it in order to get the Id value from it. Then I need to add the string DomainId before this id so that it looks something like this:
DomainId_234565
and transfer it to the next request.
Could someone please explain how to do this with Groovy? (I guess that is the best way to do it.)
Thank you
Managed to resolve it myself. I added a Properties step with a response property where I store the response from the previous step, and also added a Property Transfer step to put the response into that property. Then I added a Groovy Script step:
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def holder = groovyUtils.getXmlHolder("Properties#response")
return "DomainId_" + holder.getNodeValue("//*:Id")
and it works, returning the correct value.
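If you also need to hand the value to the next request, one way (a sketch; the property name domainId is just an example) is to store it as a test case property from the Groovy step and reference it with property expansion:
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def holder = groovyUtils.getXmlHolder("Properties#response")
// build the prefixed id and store it on the test case for later steps
def domainId = "DomainId_" + holder.getNodeValue("//*:Id")
testRunner.testCase.setPropertyValue("domainId", domainId)
Then the next request can use ${#TestCase#domainId} wherever the value is needed.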
I am creating an application using Google App Engine, in which I am fetching data from a website and storing it in my database (the datastore). Now whenever a user hits my application URL as "application_url\name =xyz&city= abc", I fetch the data from the DB and want to show it as JSON. Right now I am using a filter to fetch data based on the name and city, but I am getting the output []. I don't know how to get data from this. My code looks like this:
class MainHandler(webapp2.RequestHandler):
def get(self):
commodityname = self.request.get('veg',"Not supplied")
market = self.request.get('market',"No market found with this name")
self.response.write(commodityname)
self.response.write(market)
query = commoditydata.all()
logging.info(commodityname)
query.filter('commodity = ', commodityname)
result = query.fetch(limit = 1)
logging.info(result)
and the db structure for "commoditydata" table is
class commoditydata(db.Model):
commodity= db.StringProperty()
market= db.StringProperty()
arrival= db.StringProperty()
variety= db.StringProperty()
minprice= db.StringProperty()
maxprice= db.StringProperty()
modalprice= db.StringProperty()
reporteddate= db.DateTimeProperty(auto_now_add = True)
Can anyone tell me how to get data from the DB using name and market and convert it to JSON? Getting the data from the DB is the higher priority. Any suggestions would be of great use.
If you are starting with a new app, I would suggest using the NDB API rather than the old DB API. Your code would look almost the same though.
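For example, the same model and a query filtered on commodity and market would look roughly like this with NDB (a sketch only; the kind name and properties mirror the ones you already have, and the remaining string properties are omitted for brevity):
from google.appengine.ext import ndb

class commoditydata(ndb.Model):
    commodity = ndb.StringProperty()
    market = ndb.StringProperty()
    reporteddate = ndb.DateTimeProperty(auto_now_add=True)

# two equality filters; fetch(1) returns a list with at most one entity
result = commoditydata.query(commoditydata.commodity == commodityname,
                             commoditydata.market == market).fetch(1)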
As far as I can tell from your code sample, the query should give you results, as long as the HTTP query parameters from the request actually match entity objects in the datastore.
I can think of some possible reasons for the empty result:
you only think the output is empty, because you call write() too early; App Engine doesn't stream responses, you must write everything in one go, and you should do this after you have queried the datastore
the properties you are filtering are not indexed (yet) in the datastore, at least not for the entities you were looking for
the filters are just not matching anything (check the log for the values you got from the request)
your query uses a namespace different from where the data was stored in (but this is unlikely if you haven't explicitly set namespaces anywhere)
In the Cloud Developer Console you can query your datastore and even apply filters, so you can see the results with-out writing actual code.
Go to https://console.developers.google.com
On the left side, select Storage > Cloud Datastore > Query
Select the namespace (default should be fine)
Select the kind "commoditydata"
Add filters with example values you expect from the request and see how many results you get
Also look into Monitoring > Log which together with your logging.info() calls is really helpful to better understand what is going on during a request.
The conversion to JSON is rather easy once you have your data. In your request handler, create an empty list of dictionaries. For each object you get from the query result: set the properties you want to send, define a key in the dict and set the value to the value you got from the datastore. At the end, dump the list as a JSON string.
import json

class MainHandler(webapp2.RequestHandler):
    def get(self):
        commodityname = self.request.get('veg')
        market = self.request.get('market')
        if not commodityname or not market:
            # the request is complete after this:
            self.response.out.write("Please supply filters!")
            return
        # everything ok, try the query:
        query = commoditydata.all()
        logging.info(commodityname)
        query.filter('commodity = ', commodityname)
        query.filter('market = ', market)
        result = query.fetch(limit=1)
        logging.info(result)
        # now build the JSON payload for the response
        dicts = []
        for match in result:
            # datetime objects are not JSON serializable, so send a string
            dicts.append({'market': match.market,
                          'reporteddate': match.reporteddate.isoformat()})
        # set the appropriate header of the response:
        self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
        # convert everything into a JSON string
        jsonString = json.dumps(dicts)
        self.response.out.write(jsonString)
I am trying to write a test in Postman to verify some content in a JSON response. If I just try to verify a single line from the JSON response, everything is fine. My problem starts when I need to test multiple lines of the JSON response: it always fails. Any suggestions?
tests["Body matches string"] = responseBody.has("\"name\": null,
\"nameType\": \"NON_REFUNDABLE\"");
If I understand your question correctly I'd like to suggest that you approach this in a different way.
Instead of looking at the entire response body and seeing if the strings match, you could alternatively test the individual JSON properties that make up the response body. For example, you could do the following:
var data = JSON.parse(responseBody);
tests["name is null"] = data.name === null;
tests["nameType is non-refundable"] = data.nameType === "NON_REFUNDABLE";
There are other alternatives as well, but this is the first that comes to mind. For some more ideas about testing with Postman, check out their documentation and examples.
I am trying to delete a client object in my program and then also delete the object in activeCollab using the API provided. I can delete the object but I keep getting a 404 error when it calls the API. I did a print for c.id and I am getting the correct ID, and if I replace ':company_id' in the req statement with the actual ID of the client, it works.
Here is my code for the delete:
def deleteClient(request, client_id):
c = get_object_or_404(Clients, pk = client_id)
#adding the params for the request to the aC API
params = urllib.urlencode({
'submitted':'submitted',
'company[id]': c.id,
})
#make the request
req = urllib2.Request("http://website_url/public/api.php?path_info=/people /:company_id/delete&token=XXXXXXXXXXXXXXXXXXXX", params)
f = urllib2.urlopen(req)
print f.read()
c.delete()
return HttpResponseRedirect('/clients/')
Thanks everyone.
Oh here is the link to the API documentation for the delete:
http://www.activecollab.com/docs/manuals/developers/api/companies-and-users
From the docs it appears that :company_id is supposed to be replaced by the actual company id. This replacement won't happen automatically. Currently you are sending the company id in the POST parameters (which the API isn't expecting) and you are sending the literal value ':company_id' in the query string.
Try something like:
url_params=dict(path_info="/people/%s/delete" % c.id, token=MY_API_TOKEN)
data_params=dict(submitted="submitted")
req = urllib2.Request(
"http://example.com/public/api.php?%s" % urllib.urlencode(url_params),
urllib.urlencode(data_params)
)
Of course, because you are targeting this api.php script, I can't tell if that script is supposed to do some magic replacement. But given that it works when you manually replace the :company_id with the actual value, this is the best bet, I think.
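For completeness, dropped into the view it might look roughly like this (a sketch; example.com and MY_API_TOKEN are placeholders for your site URL and API token, and Clients is your existing model):
import urllib
import urllib2

from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404

def deleteClient(request, client_id):
    c = get_object_or_404(Clients, pk=client_id)
    # interpolate the real company id into path_info instead of the literal ':company_id'
    url_params = urllib.urlencode(dict(path_info="/people/%s/delete" % c.id, token=MY_API_TOKEN))
    data_params = urllib.urlencode(dict(submitted="submitted"))
    req = urllib2.Request("http://example.com/public/api.php?%s" % url_params, data_params)
    print urllib2.urlopen(req).read()
    c.delete()
    return HttpResponseRedirect('/clients/')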