I need to make a request for a CSS file.
I know which folder on the server my file will be in.
What I don't know is the exact file name. It will be named in the form theme.bundle.xxxxxxxxxxxxxx.css, where xxxxxxxxxxxxxx is a series of random characters and numbers generated at build time.
My question is, is it possible to make an HTTP request with a regex to get the name of the matching file(s)? I don't need help constructing the regex itself, but rather how to utilize one in combination with an HTTP request.
I can't find any information related to the usage of regular expressions to construct an HTTP request, or if this is even possible.
Short answer: Not possible, unless you have access to customize your server. You tagged this question as an "angular" question, and from an Angular standpoint alone, Angular can't make this happen.
Longer answer: Totally possible! But this ends up being more of a backend question, not an Angular question. You didn't specify which backend you have, so I'll use a Node/Express server as an example. Part of building a server is setting up routing and API endpoints. Consider this code that responds with a particular file whenever the server receives a GET request to /images/background
app.get('/images/background', function(req, res) {
  res.sendFile('public/img/background.png', { root: __dirname })
})
For your situation, you would need to set up an endpoint with similar logic to this (the folder path and the regex here are just illustrative):
const fs = require('fs');
const path = require('path');

const testFolder = 'dist'; // wherever your build outputs the CSS bundle

app.get('/getMyCssFile', function(req, res) {
  // Use the Node.js fs module to loop over the files in the folder
  let matchingFile;
  fs.readdirSync(testFolder).forEach(file => {
    // Perform the regex matching here; if the file name matches, save it
    if (/^theme\.bundle\..+\.css$/.test(file)) {
      matchingFile = file;
    }
  });
  if (matchingFile) {
    res.sendFile(path.resolve(testFolder, matchingFile));
  } else {
    // handle the error - no matching file found
    res.status(404).send('No matching CSS file found');
  }
});
On your Angular frontend, you'd just need to request /getMyCssFile, and your server will respond with the matching file.
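For example, the frontend can simply point a stylesheet at that endpoint. A minimal sketch (the endpoint name matches the illustrative route above; you could equally drop a static <link> tag into index.html):
// e.g. run once at startup - the browser fetches the CSS through the
// endpoint, and Express streams back whichever file matched
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/getMyCssFile';
document.head.appendChild(link);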
Related
In Postman, I can create a set of common tests that run after every endpoint in the collection/folder Tests tab, like so:
pm.test("status code is 200", function () {
pm.response.to.have.status(200);
});
But how should I do this for my schema validation on the response object? Each endpoint has a different expected schema. So I have something like this on each individual endpoint:
const schema = { type: 'array', items: ... }
pm.test('response has correct schema', function () {
const {data} = pm.response.json();
pm.expect(tv4.validate(data, schema)).to.be.true;
});
I can't extract this up to the collection level because each schema is different.
Now I find that I want to tweak that test a little bit, but I'll have to copy-and-paste it into 50 endpoints.
What's the recommended pattern here for sharing a test across endpoints?
I had the same issue some years ago, and I found two ways to solve this:
Create a folder for each structure object
You can group all the requests whose responses share the same structure into a new folder and create the test for that group at the folder level. The issue with this is that you will end up repeating requests across folders. (For this solution, you will need to put your test cases at the folder level.)
Create a validation using regular expressions (recommended)
Specify a set of rules in the name of each request (these rules indicate what kind of structure or response the call should have). Then create a validation in the first parent folder for each type of variation, selected with a regex. (You will need to document the naming convention and write some if statements in your parent folder's tests.)
E.g.: [POST] CRLTHM Create a group of homes
Where each prefix means:
CR: Response must be 201
LT: The response must be a list of items
HM: The type of the response must be a home object
And the regex conditional would be something like this (this is just an example; make your regex as precise as you need):
if(/CRLTHM\s/.test(pm.info.requestName))
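Putting it together, here is a rough sketch of what the parent folder's Tests script could look like (the CRLTHM convention and the tv4 usage follow the examples above; the schema object itself is just illustrative):
if (/CRLTHM\s/.test(pm.info.requestName)) {
    // CR: response must be 201
    pm.test("status code is 201", function () {
        pm.response.to.have.status(201);
    });
    // LT + HM: response must be a list of home objects
    const homeListSchema = { type: 'array', items: { type: 'object' } };
    pm.test('response has correct schema', function () {
        const { data } = pm.response.json();
        pm.expect(tv4.validate(data, homeListSchema)).to.be.true;
    });
}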
I've read Match all URLs except certain URLs in Chrome Extension but its accepted solution is only applicable to content scripts within the manifest file.
In
background.js
I have the following working code:
chrome.webRequest.onBeforeSendHeaders.addListener(handler, {urls: ["<all_urls>"] }, ["blocking", "requestHeaders"]);
The question arises because I need it to apply to all URLs except one pattern ( *://*.facebook.com/* ).
How can I create this exception?
If the only solution is regex could you please guide me on how to properly write it?
The filters you can supply when calling addListener for webRequest events, i.e. the RequestFilter type, use match patterns (as the docs note). Match patterns do not have the capability to do what you want: exclude specific URLs while allowing all others.
You will have to perform the exclusion in your webRequest handler by checking the url property of the details object which is passed to your listener. Doing so could look something like this:
function listenForWebrequest(details) {
  if (/^[^:]*:(?:\/\/)?(?:[^\/]*\.)?facebook\.com\/.*$/.test(details.url)) {
    // The URL matched our exclusion criteria
    return;
  }
  // Do your normal event processing here
}
chrome.webRequest.onBeforeSendHeaders.addListener(listenForWebrequest, {
  urls: ["<all_urls>"]
}, ["blocking", "requestHeaders"]);
I am trying to pass an encrypted token from an external application into Application Express. I want to read and work with this token in a custom authentication scheme as a way to authenticate the user into the application.
What is the best way to do this? At first, I was trying to just append the token onto the URL, eg:
/pls/apex/f?p=999:1&Token=XXXXXXXX
But then Apex returns a 404.
So then, I was trying to use the Application Express session values to send in the token, creating a URL like this:
f?p=999:1:::::TOKEN:XXXXXXXX
And then in my sentry function I would do something like:
v_token := V('TOKEN');
to get it. However, this isn't working either, and I think it's because the session isn't established yet when the sentry function executes? And is it even possible to do it this way? (Since there would be no item with this name, and no page yet to create it on...)
Is there a better approach to doing what I'm trying to do? If I had this added as an HTTP header upstream, could I read that somehow in the sentry function? Maybe with owa_util.get_cgi_env? Does that work for reading HTTP headers from the request?
Thank you
If anyone else runs into something like this - I figured out a workaround.
Just put the token in the "value" session variables section of the URL, like so
f?p=999:1::::::XXXXXXXX
Then in the "sentry function" I can get the entire query string like this:
v_query_str := owa_util.get_cgi_env('QUERY_STRING');
And then I can split v_query_str by : and get the 8th token, which is what I need.
I found some examples using apex_util.string_to_table to split the string, which works nicely.
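Putting those pieces together, a rough sketch of that part of the sentry function (variable names are illustrative; the 8th position corresponds to the item-values slot of the f?p URL above):
DECLARE
  v_query_str VARCHAR2(4000);
  v_values    apex_application_global.vc_arr2;
  v_token     VARCHAR2(4000);
BEGIN
  v_query_str := owa_util.get_cgi_env('QUERY_STRING');
  -- p=999:1::::::XXXXXXXX split on ':' leaves the token in the 8th element
  v_values := apex_util.string_to_table(v_query_str, ':');
  IF v_values.COUNT >= 8 THEN
    v_token := v_values(8);
  END IF;
  -- ...then validate/decrypt v_token as needed
END;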
I am looking for a REST service that I could use in demo code. I'd like the service:
To take at least one parameter (as a request parameter, or XML POSTed as the body of the HTTP request).
To return the result as XML (not JSON).
To be accessible anonymously (I'll call the service in sample code, so I don't want to put my key in the code, or request users to get a key).
When the Twitter API supported XML (not just JSON), I was typically using their search API. But really anything mainstream enough, easy enough to understand will do (information about zip code, weather for a city…).
If you are using .Net, why don't you just create a tiny MVC application with a controller exposing a method that returns some formatted XML? That way you can run the whole thing locally.
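For instance, a minimal controller along those lines might look like this (ASP.NET MVC; the controller name, route, and XML element names are made up):
using System.Web.Mvc;
using System.Xml.Linq;

public class DemoController : Controller
{
    // GET /Demo/Weather?city=Paris
    public ActionResult Weather(string city)
    {
        var xml = new XDocument(
            new XElement("weather",
                new XElement("city", city),
                new XElement("temperatureC", 21)));
        return Content(xml.ToString(), "application/xml");
    }
}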
EDIT:
You know, I think you can use Google Maps API without a key. I created a test project a couple of days ago. Here is a .Net code snippet (only included so that you can see how I am calling the service):
private static string GetString(Uri requestUri)
{
    var output = string.Empty;
    var response = WebRequest.Create(requestUri).GetResponse();
    var stream = response.GetResponseStream();
    if (stream != null)
    {
        using (var reader = new StreamReader(stream))
        {
            output = reader.ReadToEnd();
            reader.Close();
        }
    }
    response.Close();
    return output;
}
I pass in a URI built from a URL like this:
https://maps.googleapis.com/maps/api/directions/xml?mode=walking&origin={0},{1}&destination={2},{3}&sensor=false
Where {0},{1} are the first lat/long pair, and {2},{3} are the second. I am not attaching a key to this, and it worked for my testing. My method returns a string that I later handle like so:
var response = XDocument.Parse(GetString(request));
which gives me back XML. Again, I still recommend just creating your own web app and deploying it somewhere publicly accessible (either on a LAN or on the web), but if you just need a web service that returns XML, you can use that.
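For example, pulling a single value out of the parsed document (the status element is part of the Directions XML response, but verify against a real response):
var status = (string)response.Root.Element("status"); // e.g. "OK"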
The Yahoo! Weather API can be used for this. It takes a location as a request parameter and returns the weather forecast for that location as XML. It also returns weather information as HTML, which you could display as-is to the user. Also make sure that you respect the terms of use described at the bottom of the Weather API documentation page.
I would like to fetch a source of file and wrap it within JSONP.
For example, I want to retrieve pets.txt as text from a host I don't own. I want to do that by using nothing but client-side JavaScript.
I'm looking for an online service which can convert anything to JSONP.
YQL
Yahoo Query Language is one of them.
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://elv1s.ru/x/pets.txt"&format=json&callback=grab
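Consuming that from the page is plain JSONP: define the callback named in the callback parameter and inject a script tag. A minimal sketch:
function grab(data) {
    // YQL puts the fetched content under data.query.results
    console.log(data.query.results);
}

var script = document.createElement('script');
script.src = 'http://query.yahooapis.com/v1/public/yql' +
    '?q=select%20*%20from%20html%20where%20url%3D"http://elv1s.ru/x/pets.txt"' +
    '&format=json&callback=grab';
document.head.appendChild(script);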
This works as long as the target URL is not blocked by robots.txt; YQL respects robots.txt. For example, I can't fetch http://userscripts.org/scripts/source/62706.user.js because it is blocked via robots.txt:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://userscripts.org/scripts/source/62706.user.js"&format=json&callback=grab
"forbidden":"robots.txt for the domain disallows crawling for url: http://userscripts.org/scripts/source/62706.user.js"
So I'm still looking for other solutions.
I built jsonpwrapper.com.
It's unstable and slower than YQL, but it doesn't care about robots.txt.
Here's another one, much faster, built on DigitalOcean & CloudFlare, utilizing caching et al: http://json2jsonp.com
Nononono. No. Just please; no. That is not JSONP, it is javascript that executes a function with an object as its parameter that contains more javascript. Aaah!
This is JSON because it's just one object:
{
  "one": 1,
  "two": 2,
  "three": 3
}
This is JSONP because it's just one object passed through a function; if you go to http://somesite/get_some_object?jsonp=grab, the server will return:
grab({
  "one": 1,
  "two": 2,
  "three": 3
});
This is not JSON at all. It's just Javascript:
alert("hello");
And this? Javascript code stored inside a string (ouch!) inside an object passed to a function that should evaluate the string (but it might or might not):
grab({"body": "alert(\"Hello!\");\n"});
Look at all those semicolons and backslashes! I get nightmares from this kind of stuff. It's like a badly written Lisp macro because it's much more complicated than it needs to (and should!) be. Instead, define a function called grab in your code:
function grab(message) {
alert(message.body);
}
and then use JSONP to have the server return:
grab({body: "Hello!"});
Don't let the server decide how to run your web page. Instead, let your web page decide how to run itself and just have the server fill in the blanks.
As for an online service that does this? I don't know of any, sorry.
I'm not sure what you're trying to do here, but nobody will use something like this. Nobody is going to trust your service to always execute as it should and output expected JavaScript code. You see Yahoo doing it because people trust Yahoo, but they will not trust you.