This is something I have been trying to wrap my head around, but I am not sure why I cannot grasp it. I have been reading articles and the Taffy wiki, but somehow I am still not getting it. For example:
https://github.com/atuttle/Taffy/tree/master/examples/api_requireApiKey
I am taking the following approach:
I have created a table called apikeys. The table has just 3 columns: authid, apikey, authtoken.
Now, if I manually add a key and token, how should I pass them with my requests when calling the API from a 3rd-party application?
And since this is effectively hardcoded, how can I rotate my token every few hours? And if the application detects that the token has changed or does not match, what should I call to re-authenticate with a new token and keep it alive for the next few hours?
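To make my intent concrete, this is roughly the flow I have in mind, sketched in plain Python rather than Taffy/CFML (only the table and column names above are real; the db helper calls and everything else are made up):

# Rough sketch of the flow I'm imagining (not Taffy/CFML, just the idea).
# The client sends its apikey + current authtoken with every request, e.g. as headers.
import secrets

def authenticate(apikey, authtoken, db):
    row = db.lookup("apikeys", apikey=apikey)              # hypothetical DB helper
    if row is None or row["authtoken"] != authtoken:
        # Token rotated or wrong: tell the client to re-authenticate.
        return {"status": 401, "error": "re-authenticate"}
    return {"status": 200}

def rotate_token(apikey, db):
    # Run every few hours (the table would presumably also need some kind of
    # issued-at/expiry column to know when), and return the new token to a
    # re-authenticating client.
    new_token = secrets.token_hex(32)
    db.update("apikeys", apikey=apikey, authtoken=new_token)  # hypothetical DB helper
    return new_token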
If, rather than solving it for me, someone can point me in the right direction, that would be great.
I tried reaching the author of Taffy, but he is too busy to answer the questions I have. Everyone is pointing me to the docs, and the docs do not tell me anything about this part.
Ok, I've been using Google Cloud Platform for some video files that are viewable from a few web pages I built. I started this two or three years ago, and I have loved it.
But, now it appears they broke it, without warning/telling us.
So, in the platform's console, yesterday (for the first time in a month or two), I uploaded another video... that part went fine. But when it came time to click the checkbox to grant public access, the checkbox was GONE. (The only part of the UI that looks new is the column labeled 'public access'. Instead of just a checkbox to toggle on or off, there's now a yellow triangle and an oval-shaped symbol. Once or twice, I was able to get a popup to appear saying 'edit permission', but that quickly led into the weeds.)
After half an hour or so, I finally thought to call platform support, and explained my problem to a guy (with just enough Australian accent to cause me to have to ask for repeats quite a bit...sigh).
So, they logged a case# for me, and I said I was headed to bed and asked that we now use email (rather than the phone) to continue. Just before bed, I got the case# and a query about whether it was ok for them to 'change my console'. I replied to the email, saying yes, and went to bed.
So that was last night. This morning, re-reading their email, it seems to say that it could be 3 or 4 days before a more technical person contacts me.
After some re-reading of their platform-console docs, I'm now GUESSING that maybe they just nuked the public-access checkbox, and that I'm now supposed to spend hours (days?) taking a short course on IAM permissions and learn some new long-winded method.
(This whole mess could have been avoided if they'd just emailed us an informational warning of this UI change, with a short 5-step list or tutorial on how to use their 'new, much more complicated, way to specify public access'. From where I sit, this change is equivalent to Microsoft saying 'instead of that checkbox, you'll need to learn to make registry edits... see our platform docs on how to do that.')
Right now, I have more than half a mind to seriously consider bailing out of Google's cloud storage and switching to one of the others. But I'm not quite ready yet to make that jump (from the frying pan into the fire?). :^)
Anyone else been down this road? What meeting did I miss? Is there a quicker way out of my dilemma, than just waiting for Google-support to get back to me?
It looks like the change you mention was introduced on July 18th. I’m not sure why, but judging by the change description, it is aimed at preventing sensitive information from accidentally being made public: “Objects can no longer be made public through one-click actions”.
You can find the procedure to make a single object public here. It can be achieved through the Console and won't take you more than a few minutes. Once the object is shared publicly, you can use the icon in the “public access” column to get the URL for the object.
You can also make all the content of a bucket public using a similar approach.
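If you'd rather do it programmatically than through the Console, something along these lines should work with the google-cloud-storage Python client (bucket and object names are placeholders):

# Make one existing object publicly readable (placeholder names).
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.blob("videos/my-video.mp4")
blob.make_public()        # grants allUsers read access on this object
print(blob.public_url)    # the public URL you can embed in your pages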
When you upload your objects into a bucket, you can upload them with the ACL set to publicRead, and all your objects will have a public URL.
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Storage.V1;  // StorageClient, UploadObjectOptions, PredefinedObjectAcl

public async Task UploadObjectAsync(string bucketName, string objectName, Stream source, string contentType = "image/jpeg")
{
    var storage = StorageClient.Create();
    // Uploading with the PublicRead predefined ACL makes the object publicly readable.
    await storage.UploadObjectAsync(bucketName, objectName, contentType, source, new UploadObjectOptions()
    {
        PredefinedAcl = PredefinedObjectAcl.PublicRead
    });
}
As I suspected. (I still wonder if they even considered sending an email to each registered/existing customer.)
Ok, yes, (finally, after some practice), this solves it! Thanks for those two answers.
(But in my view, their UI change is still a work in progress.) So, I have a SUGGESTION for ya, Google. Once one is in the permissions-edit dialog and remembers to do an 'add', there are the 3 fields. The first and third are fine... drop-downs with choices. But that middle entry needs work... how about something like an auto-guess-ahead: initialize the field to a suggested value of 'allUsers', so we don't have to remember what to type and how to spell it, or something along those lines.
EDIT: [Actually, it ought to be possible to make that field a drop-down-list choice, with 'allUsers' as one suggested value, and a second value as a text-entry (for specific user-names, etc).]
Unfortunately, it is not possible to list files without access to the bucket that contains them. This is due to the current design of the library, which requires that the bucket be loaded before listing its files.
Disclaimers:
This is oriented towards Prestashop 1.5, but if the answer is "this is fixed in version 1.x", then I'll push to get our shop updated.
I'm also tagging it as REST because I think I explained it thoroughly enough that you don't need actual experience with Prestashop to understand it.
So in Prestashop we have these Web Services, which lack support for use cases as simple as searching by category.
1. Let's say you want to get all the products from categories 1, 3 and 17:
So what is the solution?
Well, you can do something along the lines of this answer: https://stackoverflow.com/a/34061229/4209853
where you get all the product IDs from categories 1, 3 and 17, and then make another call for products, filtering by those IDs:
'filter[id]' => '['.implode('|',$productIDsArrayIGotBefore).']',
It's ugly and 20th-century-ish, but well... if it gets the job done... except it doesn't.
You see, this is a call for getting a resource, and someone somewhere decided:
Hey, we have all these nice HTTP verbs, so let's use them for REST CRUD interfaces: POST for C, GET for R, PUT for U and DELETE for D. Neat.
And that's nice and all, but combined with the lack of expressive power of Prestashop's Web Services, it means it's stupidly easy to run into... you guessed it: 414.
HTTP Error 414: Request-URI Too Long
and we all know that modifying Apache so it accepts longer request URIs is not a neat, scalable solution.
So we could try to split the array and make multiple calls, which is conceptually... ugh. Not just because of the performance hit of making multiple queries, but also because we would need to take into account the total number of characters of all the concatenated IDs to calculate how many we can (safely) ask for in one call. And all of that would have its own caveats, like:
2. What if we also want to filter them, e.g. active=1?
Now we're in for a ride, because we can't know beforehand how many calls we will need to make.
Let's define:
N are the IDs I got from the categories
n is the number of IDs I can safely ask for in one call
T is the number of (filtered) products I want
t are the (filtered) products I already have
k are the (filtered) products we receive from each call
So we would end up with something like:
do {
    n0 = min(T - count(t), n);   // how many IDs to ask for this round, capped at the safe limit
    ids = take(N, n0);           // pull (and remove) the next n0 IDs from N
    k = get(products, ids);      // fetch the (filtered) products for those IDs
    t += k;
} while (count(k) != 0 and count(t) < T and !empty(N))
...which is just... bonkers.
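To make that loop concrete, here is a rough sketch in Python of the batched workaround (the shop URL, key and "safe" batch size are placeholders I made up, and whether active can be filtered like this in 1.5 is an assumption on my part):

# Rough sketch of the batched-calls workaround; filter[id]=[1|2|3] is the
# syntax shown above, everything configurable here is a placeholder.
import requests

SHOP = "https://example-shop.com/api"   # placeholder
KEY = "YOUR_WEBSERVICE_KEY"             # placeholder
SAFE_BATCH = 100                        # guess at how many IDs fit safely in one URI

def get_active_products(category_ids, wanted):
    ids = list(category_ids)            # N: IDs gathered from the category calls
    have = []                           # t: filtered products collected so far
    while ids and len(have) < wanted:
        batch, ids = ids[:SAFE_BATCH], ids[SAFE_BATCH:]
        resp = requests.get(
            SHOP + "/products",
            params={
                "filter[id]": "[" + "|".join(map(str, batch)) + "]",
                "filter[active]": "1",          # assumption: active is filterable
                "display": "full",
                "output_format": "JSON",
            },
            auth=(KEY, ""),             # Prestashop takes the key as the basic-auth user
        )
        resp.raise_for_status()
        have.extend(resp.json().get("products", []))
    return have[:wanted]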
The only elegant solution I can come up with is creating a new Prestashop Web Service that acts as a wrapper, receiving the request through POST and forwarding it to the Prestashop service.
But before that... do you have a better solution, using some kind of RESTomancy I may be missing?
I'm designing a REST API where, amongst others, there are two objects.
Journey
Report
For each Journey there are many Reports en route, and each Report has exactly one associated Journey.
A user might create a Journey using the API as follows...
POST /journey/
Then retrieve the details...
GET /journey/1226/
The first question is: if a user wanted to post a Report to their Journey, which is the 'correct' URL structure for the API to impose? This seems intuitive to me...
POST /journey/1226/report/
...or should it be this...
POST /report/
...whereby in the latter, the Journey ID is passed in the request body somewhere?
The second question is, how might one go about implementing the first case in a tool such as the Django REST framework?
Thanks!
The URL/URI structure is almost completely irrelevant. It is nice to be able to read it, or easily change or even guess it, but that is it. There is no "requirement", official or unwritten, about how they should look.
The point, however, is that you supply the URIs to your clients in your responses. Each GET returns a representation that contains links to the next "states" your client can reach. This means the server has full control over the URI structure; the client usually only has to know the "start" or "homepage" URI, and that's it.
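For illustration only (the field names and domain are made up), a GET on a journey might return something like this, so the client never has to build the report URL itself:

# Hypothetical representation of a journey; the server embeds the URIs the
# client may follow next instead of the client constructing them.
journey = {
    "id": 1226,
    "status": "in-progress",
    "links": {
        "self": "https://api.example.com/journey/1226/",
        "reports": "https://api.example.com/journey/1226/report/",  # POST new Reports here
    },
}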
Here is an article which discusses this question and has some good points: http://www.ben-morris.com/hackable-uris-may-look-nice-but-they-dont-have-much-to-do-with-rest-and-hateoas/
I'll pass on the second question :) I haven't used that particular framework.
I'm helping develop a new API for an existing database.
I'm using Python 2.7.3, Django 1.5 and the django-rest-framework 2.2.4 with PostgreSQL 9.1
I need/want good documentation for the API, but I'm shorthanded and I hate writing/maintaining documentation (one of my many flaws).
I need to allow consumers of the API to add new "POS" (points of sale) locations. In the Postgres database, there is a foreign key from pos to pos_location_type. So, here is a simplified table structure.
pos_location_type(
    id serial,
    description text not null
);

pos(
    id serial,
    pos_name text not null,
    pos_location_type_id int not null references pos_location_type(id)
);
So, to allow them to POST a new pos, they will need to give me a "pos_name" and a valid pos_location_type. So, I've been reading about this stuff all weekend. Lots of debates out there.
How are my API consumers going to know what a pos_location_type is, or what value to pass here?
It seems like I need to tell them where to get a valid list of pos_locations. Something like:
GET /pos_location/
As a quick note, examples of pos_location_type descriptions might be: ('school', 'park', 'office').
I really like the "browsability" of the Django REST Framework, but it doesn't seem to address this type of thing. I actually had a very nice chat on IRC with Tom Christie earlier today, and he didn't really have an answer on what to do here (or maybe I never made my question clear).
I've looked at Swagger, and that's a very cool/interesting project, but take a look at their "pet" resource on their demo here. Notice it is pretty similar to what I need to do. To add a new pet, you need to pass a category, which they define as class Category(id: long, name: string). How is the consumer supposed to know what to pass here? What's a valid id or name?
In Django REST framework, I can define/override what is returned in the OPTIONS call. I guess I could come up with my own little "system" here and return some information like:
pos-location-url: '/pos_location/'
in the generic form, it would be: {resource}-url: '/path/to/resource_list'
And that would sort of work for the documentation side, but I'm not sure it's really a nice solution programmatically. What if I change the resource's location? That would mean my consumers would need to programmatically make an OPTIONS call on the resource to figure out all of the relations. Maybe not a bad thing, but it feels a little weird.
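Roughly what I'm imagining is something like this (a sketch against a current DRF/Python 3 setup rather than the 2.2.4/Python 2.7 I'm actually on; the view and URL names are made up):

# Rough sketch only: advertise related-resource URLs in the OPTIONS response.
# View/URL names are hypothetical; the exact hook differs between DRF versions.
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.reverse import reverse

class PosList(APIView):
    def options(self, request, *args, **kwargs):
        data = super().options(request, *args, **kwargs).data  # default metadata
        data["related"] = {
            # Where a client can fetch the valid pos_location_type values.
            "pos_location_type": reverse("pos-location-list", request=request),
        }
        return Response(data)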
So, how do people handle this kind of thing?
Final notes: I get that I don't really want a "leaky" abstraction here, with my database peeking through the API layer, but the fact remains that there is a foreign key constraint on this existing database, and any insert that doesn't have a valid pos_location_type_id raises an error.
Also, I'm not trying to open up the URI vs. ID debate. Whether the user has to use the pos_location_type_id int value or a URI doesn't matter for this discussion. In either case, they have no idea what to send me.
I've worked with this kind of stuff in the past. I think there are two ways of approaching this problem. The first you already said: expose an endpoint so users of the API can look up the id-like values of pos_location_type. Many APIs do this, because a person developing against your API is going to have to read your documentation and will know where to get the pos_location_type values from. End users should not worry about this, because they will have an interface, probably showing a dropdown list of text values.
On the other hand, the other way I've handled this is not very RESTful. Let's suppose you have a location in New York; the POST could be something like:
POST /pos/new_york/
You can handle /pos/(location_name)/ by normalizing the text and then searching the database for that value (or something similar); if the place does not exist, you just create a new one. That is in case users can add new places; if not, then the user would have to know what fixed places exist, which puts us back in the first situation.
That way you can avoid pos_location_type in the request data; you can programmatically map it to a valid ID.
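As a rough illustration of that mapping (the model and field names here are guesses based on the tables in the question, not actual project code):

# Sketch only: map a normalized location name to a pos_location_type row,
# creating it if it doesn't exist. Model names are hypothetical.
from django.utils.text import slugify
from myapp.models import Pos, PosLocationType

def create_pos(pos_name, location_name):
    normalized = slugify(location_name)   # e.g. "New York" -> "new-york"
    location_type, _ = PosLocationType.objects.get_or_create(description=normalized)
    return Pos.objects.create(pos_name=pos_name, pos_location_type=location_type)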
I am currently trying to develop a web activity that a client would like to track via their Learning Management System. Their LMS uses the AICC standard (HACP binding), and they keep the actual learning objects on a separate content repository.
Right now I'm struggling with the types of communication between the LMS and the "course", given that they sit on two different servers. I'm able to retrieve the sessionId and the aicc_url from the URL string when the course launches, and I can successfully post values to the aicc_url on the LMS.
The difficulty is that I cannot read and parse the response from the LMS (which is formatted as plain text). AICC stipulates that the course start by posting a "getParam" command to the aicc_url with the session id in order to retrieve information like completion status, bookmarking information from previous sessions, user ID information, etc., all of which I need.
I have tried three different approaches so far:
1 - I started with jQuery (1.7) and AJAX, which is how I would typically go about a same-server implementation. This returned a "no transport" error on the XMLHttpRequest. After some forum reading, I tried making sure that the ajax call's crossDomain property was set to true, and followed a recommendation to insert $.support.cors = true above the ajax call; neither helped.
2 & 3 - I tried using an old-school frameset with a form in a bottom frame, which would submit and refresh with the text returned from the LMS, which I then read via JavaScript; and then a variation on that using an iframe as the target of an actual form, with an onload handler to read and parse the contents. Both approaches worked in a same-server environment but fail in the cross-domain environment.
I'm told that all the other courses running off the content repository bookmark as well as track completion, so obviously it is possible to read the return values from the LMS somehow. AICC is frequently pitched as working in cross-server scenarios, so I'm thinking there must be a commonly used method for doing this in the AICC structure that I am overlooking. My forum searches so far haven't turned up anything that's gotten me much further, so if anyone has any experience with cross-domain AICC implementations I could certainly use recommendations!
The only idea I have left is to set up a PHP "relay" on the same server as the course: the front-end page sends values to it, the PHP submits those to the LMS, and it relays the text returned from the LMS back to the front-end iframe or ajax call, so that everything is perceived as being within the same domain... I'm not sure if there's a way to solve the issue without going server-side. It seems likely there must be a common solution to this within AICC.
Thanks in advance!
Edits and updates:
For anyone encountering similar problems, I found a few resources that may help explain the problem as well as some alternate solutions.
The first is specific to Plateau, a big player in the LMS industry that was acquired by SuccessFactors. It's some documentation they provide on setting up a proxy to handle cross-domain content:
http://content.plateausystems.com/ContentIntegration/content/support_files/Cross-domain_Proxlet_Installation.pdf
The second is a slide presentation from SuccessFactors that highlights the challenge of cross-domain content and illustrates some back-end ideas for resolving it, including the use of reverse proxies. The relevant parts start around slides 21-22 (page 11 in the PDF).
http://www.successfactors.com/static/docs/successconnect/sf/successfactors-content-integration-turley.pdf
Hope that helps anyone else out there trying to resolve the same issues!
The answer in this post may lead you in the right direction:
Best Practice: Legitimate Cross-Site Scripting
I think you are on the right track with setting up a PHP "relay." I think this is similar to choice #1 in the answer from the other post and seems to make the most sense given what you described in your question.
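For what it's worth, the general shape of such a relay might look like this; I've sketched it in Python/Flask purely for illustration (the endpoint and field names are assumptions, and in practice you'd want to whitelist which aicc_url values it will forward to):

# Minimal sketch of a same-origin relay for AICC HACP calls (Flask used for
# illustration only; the posts above talk about PHP, but the idea is the same).
from flask import Flask, request, Response
import requests

app = Flask(__name__)

@app.route("/aicc-relay", methods=["POST"])
def aicc_relay():
    # aicc_url and session_id come from the course launch querystring.
    # NOTE: validate/whitelist aicc_url here so this isn't an open relay.
    aicc_url = request.form["aicc_url"]
    payload = {
        "command": request.form.get("command", "getParam"),
        "version": "2.2",
        "session_id": request.form["session_id"],
    }
    # Forward the HACP request server-side, where same-origin rules don't apply.
    lms_response = requests.post(aicc_url, data=payload, timeout=10)
    # Relay the plain-text HACP response back to the browser unchanged.
    return Response(lms_response.text, mimetype="text/plain")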