Strange behaviour when uploading scaled file using Fine Uploader - ColdFusion

I have implemented Fine Uploader 4.4 and it works perfectly when uploading multiple files using ColdFusion.
The endpoint code is very simple and looks like this:
<cffile action="upload"
        destination="#application.Config.imageDir#"
        nameconflict="overwrite"
        filefield="FORM.qqFile" />
<cfif CFFILE.contentType EQ "image" OR ListFindNoCase("jpg,jpeg,gif,png", CFFILE.serverFileExt)>
    <cfset local.fileName = CFFILE.serverFile />
</cfif>
Whenever I upload a single image or multiple images, the local.fileName variable is correctly set to the file name you would usually see in qqfilename, for example "image0001.jpg".
The JavaScript code to send this data is simply:
$('#fine-uploader').fineUploader({
    request: {
        endpoint: '<cfoutput>#application.Config.fineUploaderProxy#?cfc=#cfcName#&functionName=#functionName#</cfoutput>'
    }
});
However, as soon as I add scaling, strange behaviour starts occurring. The scaled version is sent to my upload handler, but the file name being sent is always "blob" for every scaled image, instead of the name I would expect, which would be something like
"image0001(small).jpg"
The code I add to activate the scaling is simply:
$('#fine-uploader').fineUploader({
    scaling: {
        sizes: [
            { name: "small", maxSize: 50 }
        ]
    },
    request: {
        endpoint: '<cfoutput>#application.Config.fineUploaderProxy#?cfc=#cfcName#&functionName=#functionName#</cfoutput>'
    }
});
Could someone please help me understand why the filename "blob" is being sent with the qqfile instead of the actual file name? I am using the latest version of Chrome.
Thanks

This is due to the fact that Fine Uploader doesn't actually send a File when it generates a scaled version from a reference file. The entity is specifically a Blob. While a File has a name property, a Blob does not. Because of this, when the browser constructs the multipart segment for a Blob, it sets the filename parameter to "blob". There are ways to overcome this, but not reliably cross-browser. So, Fine Uploader will always send the actual file name in a "qqfilename" parameter. Your server should look at this value to reliably determine the file's name in all cases.
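In the ColdFusion handler from the question, that could look something like this (a minimal, untested sketch):
<cffile action="upload"
        destination="#application.Config.imageDir#"
        nameconflict="overwrite"
        filefield="FORM.qqFile" />
<!--- Prefer the original name Fine Uploader sends in qqfilename;
      scaled Blobs otherwise arrive with the generic name "blob" --->
<cfif structKeyExists(FORM, "qqfilename") AND Len(FORM.qqfilename)>
    <cfset local.fileName = FORM.qqfilename />
<cfelse>
    <cfset local.fileName = CFFILE.serverFile />
</cfif>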

Related

Does Quarto publish images from web links on Quarto Pub (book format)?

I want to publish my book website with the command
quarto::quarto_publish_site()
The book website is already set up on Quarto Pub. If I don't add any images as web links, the website renders and can be uploaded.
Now, as soon as I add an image as a web link, with example code like this:
![](https://www.website.com/wp-content/uploads/sites/2/2022/04/picture.jpg)
When I render it locally, it works. When I launch the command to publish, I get:
compilation failed- error Unable to load picture or PDF file 'https://www.website.com/wp-content/uploads/sites/2/2022/04/picture.jpg'.
The publishing process is interrupted after this error. Exactly the same happens if I launch the command from the terminal.
Is this intended, to prevent publishing links to images from other websites on Quarto Pub?
Or can I do something to avoid having to download all these pictures?
Including images via URL is not supposed to work for PDF output. This is not a Quarto issue but comes from how Pandoc translates ![]() to LaTeX.
Instead, you could automatically generate a local copy of the file (if it is not available yet) and then include the image in an R code chunk like this:
```{r, echo=FALSE, fig.cap='Kid', dpi=100}
if (!file.exists("kid.jpg")) {
  download.file(
    url = "https://edit.co.uk/uploads/2016/12/Image-1-Alternatives-to-stock-photography-Thinkstock.jpg",
    destfile = "kid.jpg",
    mode = "wb"  # binary mode, so the image is not corrupted on Windows
  )
}
knitr::include_graphics("kid.jpg")
```
(Of course, including the image via ![](kid.jpg) at a different location will work too once the file exists locally.)

Can Postman take a file as a variable from a path?

I have a Postman collection with a set of three API calls I'd like to chain together and feed with a data file using the runner function. Let's say they're:
/prepareUpload
/upload
/confirmUpload
and the output of each is needed for the next step. I'm happily pulling stuff out of the responses and putting it into variables ready for the next call, but the bit I'm falling down on is that /upload needs a parameter of type file, and Postman doesn't seem to let me set it to a variable.
I've tried exporting the collection, manually editing the JSON to force it to a variable, and running that, so something like:
<snip>
{
    "key": "file",
    "contentType": "{{contentType}}",
    "type": "file",
    "src": ["{{fullpath}}"]
}
],
"options": {
    "formdata": {}
}
where {{contentType}} and {{fullpath}} are coming from my data file, but it never seems to actually do the upload.
Does anyone know if this is possible?
Issue:
In Postman, if we check the UI, we notice that there is no way to define the file path as a variable.
This is a limitation when we need to run the collection from different systems.
Solution:
The solution is to hack the collection JSON file. Open the JSON, edit the formdata src, and replace it with a variable, say file_path, so: {{file_path}}
Now in Postman:
In a pre-request script you can use the code below to set the path:
pm.environment.set("file_path", "C:/Users/guest/Desktop/1.png")
You can also save it as an environment variable directly, or pass it on the command line with --env-var when running the collection through newman, as shown below.
Note:
Set "allow reading files outside working directory" to true (Settings, top right corner).
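For example (an untested one-liner; the --env-var flag is available in newman 4 and later, and the path is just a placeholder):
newman run collection.json --env-var "file_path=C:/Users/guest/Desktop/1.png"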
It's not possible to read local files with Postman (there are at least two issues concerning that in their tracker on GitHub: 798, 7210).
A workaround would be to set up a server that provides the file, so you could get the data via a request to that server.
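A minimal Node sketch of such a file server might look like this (untested; the ./files directory and the port are assumptions):
// serve-files.js -- tiny static file server with no dependencies.
// A request like GET /1.png returns ./files/1.png.
const http = require("http");
const fs = require("fs");
const path = require("path");

http.createServer(function (req, res) {
    // basename() strips directory parts, so requests cannot escape ./files
    const filePath = path.join(__dirname, "files", path.basename(req.url));
    fs.readFile(filePath, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end("Not found");
            return;
        }
        res.writeHead(200, { "Content-Type": "application/octet-stream" });
        res.end(data);
    });
}).listen(3000, function () {
    console.log("Serving ./files on http://localhost:3000");
});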
OK, so I found the answer to this, and the short version is: Postman can't do it, but Newman can:
https://github.com/postmanlabs/newman#file-uploads
It's a fair bit more effort to get it set up and working, but it does provide a solution for automating the whole process.
For Postman (as of version 9.1.5) on macOS, you can trick Postman by naming a file in your shared directory with your variable name (i.e. {{uploadMe}}). Then you choose this file (named as the variable) from the file selector and voilà.
In my case the files I upload are located in the same shared directory; don't forget to set the shared directory in your Postman settings.
The solution is quite simple:
1. Make sure you have the latest version of Postman.
2. Go to Postman settings to find your working directory, and add the desired file to that working directory.
3. In the Body tab, select form-data.
4. In the Pre-request Script tab, enter the code below.
pm.request.body.mode = "formdata";
pm.request.body.formdata = {
    "key": "preferredKey",
    "type": "file",
    "src": "fileName.extension"
};

Thumbnail the first page of a PDF from a stream in GraphicsMagick

I know how to use GraphicsMagick to make a thumbnail of the first page of a PDF if I have a PDF file and am running gm locally. I can just do this:
gm(pdfFileName + "[0]")
    .background("white")
    .flatten()
    .resize(200, 200)
    .write("output.jpg", (err, res) => {
        if (err) console.log(err);
    });
If I have a file called doc.pdf then passing doc.pdf[0] to gm works beautifully.
But my problem is I am generating thumbnails on an AWS Lambda function, and the Lambda takes as input data streamed from a source S3 bucket. The relevant slice of my lambda looks like this:
// Download the image from S3, transform, and upload to a different S3 bucket.
async.waterfall([
    function download(next) {
        s3.getObject({
            Bucket: sourceBucket,
            Key: sourceKey
        }, next);
    },
    function transform(response, next) {
        gm(response.Body).size(function(err, size) { // <--- gm USED HERE
            ...
Everything works, but for multipage PDFs, gm generates a thumbnail from the last page of the PDF. How do I get the [0] in there? I did not see a page selector in the gm documentation, as all their examples used filenames, not streams. I believe there should be an API for this, but I have not found one.
(Note: the [0] is really important, not only because the last page of a multipage PDF is sometimes blank, but also because, when running gm on the command line with large PDFs, the [0] returns very quickly, while without it the whole PDF is scanned. On AWS Lambda, it's important to finish quickly to save on resources and avoid timeouts!)
You can use the .selectFrame() method, which is equivalent to specifying [0] directly in the file name.
In your code:
function transform(response, next) {
    gm(response.Body)
        .selectFrame(0) // <--- select the first page
        .size(function(err, size) {
            ...
Don't be confused by the name of the function: it works not only with frames for GIFs, but also just fine with pages for PDFs.
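Put together, a minimal self-contained sketch might look like this (untested; it assumes response.Body is the Buffer returned by s3.getObject, and that GraphicsMagick with Ghostscript support is available in the Lambda environment):
const gm = require("gm");

function thumbnailFirstPage(pdfBuffer, callback) {
    gm(pdfBuffer, "input.pdf")      // the file name hint tells gm the input is a PDF
        .selectFrame(0)             // page [0], same as "input.pdf[0]" on the command line
        .background("white")
        .flatten()
        .resize(200, 200)
        .toBuffer("JPG", callback); // callback(err, jpegBuffer)
}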
Check out this function's source on GitHub.
Credits to @BenFortune for his answer to a similar question about extracting the first frame of a GIF. I took it as inspiration and tested this solution with PDFs; it actually works.
Hope it helps.

Access S3 from Lucee/Railo/ColdFusion built-in functions

I am having trouble accessing an S3 bucket to just list the files using Lucee. I have followed the directions here and here with no luck. I keep getting the error message that the directory does not exist.
This is in my Application.cfc:
this.name = "s3-test";
this.mappings = {
    "/s3test" = "s3://luceetest/blah"
};
this.s3 = {
    "accessKeyId": "XXXXXXXXXXXXXX",
    "awsSecretKey": "ZZZZZ/XXXXXX/YYYYY",
    "defaultLocation": "Oregon"
};
This is the code I am testing with:
<cfsetting showDebugOutput="Yes">
<cfdirectory action="list" directory="s3://coldlucee/blah" name="blah" recurse="yes" type="file">
<cffile action="write" output="s3 specs" file="s3://coldlucee/blah/test.txt"/>
I have also tried to map it inside the web interface, using the format s3://accessKeyID:awsSecretKey@coldlucee/blah as the resource, but it always shows up as red, which means it can't be found.
I am hoping someone can help me out with this; it seems so simple in the articles I have read, so I might have a configuration error on the Amazon side. I have tried making the bucket public, to no avail.
I've never been able to get CF's implementation of S3 to work either. I ended up using an S3 REST Wrapper I found here:
https://gist.github.com/CFJSGeek/3f6f14ba86049af75361
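For reference, the inline-credential form the Lucee documentation describes looks like this (an untested sketch; note the @ before the bucket name, and a secret key containing / may need to be URL-encoded first):
<cfset accessKeyId = "XXXXXXXXXXXXXX" />
<cfset awsSecretKey = "ZZZZZ/XXXXXX/YYYYY" />
<!--- Untested: credentials embedded directly in the virtual file system path --->
<cfdirectory action="list"
             directory="s3://#accessKeyId#:#awsSecretKey#@coldlucee/blah"
             name="blah" />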

Retrieve service information from WFS GetCapabilities request with GeoExt

This is probably a very simple question, but I just can't seem to figure it out.
I am writing a JavaScript app to retrieve layer information from a WFS server using a GetCapabilities request via GeoExt. GetCapabilities returns information about the WFS server (the server's name, who runs it, etc.) in addition to information on the data layers it has on offer.
My basic code looks like this:
var store = new GeoExt.data.WFSCapabilitiesStore({ url: serverURL });
store.on('load', successFunction);
store.on('exception', failureFunction);
store.load();
This works as expected, and when the loading completes, successFunction is called.
successFunction looks like this:
successFunction = function(dataProxy, records, options) {
    doSomeStuff();
}
dataProxy is an Ext.data.DataProxy object, records is a list of records (one for each layer on the WFS server), and options is empty.
And here is where I'm stuck: in this function, I can access all the layer information regarding data offered by the server. But I also want to extract the server information contained in the XML fetched during store.load() (see below), and I can't figure out how to get it out of the dataProxy object, where I'm sure it must be squirreled away.
Any ideas?
The fields I want are contained in this snippet:
<ows:ServiceIdentification>
    <ows:Title>G_WIS_testIvago</ows:Title>
    <ows:Abstract/>
    <ows:Keywords>
        <ows:Keyword/>
    </ows:Keywords>
    <ows:ServiceType>WFS</ows:ServiceType>
    <ows:ServiceTypeVersion>1.1.0</ows:ServiceTypeVersion>
    <ows:Fees/>
    <ows:AccessConstraints/>
Apparently, GeoExt currently discards the server information, undermining the entire premise of my question.
Here is a code snippet that can be used to tell GeoExt to grab it. I did not write this code, but I have tested it and found it works well for me:
https://github.com/opengeo/gxp/blob/master/src/script/plugins/WMSSource.js#L37
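If patching the source is not an option, an untested alternative is to fetch the capabilities document yourself and read the <ows:ServiceIdentification> fields straight out of the raw XML (the namespace URI below is the OWS 1.0.0 one used by WFS 1.1.0; getElementsByTagNameNS and textContent require a namespace-aware browser DOM, so older IE would need a fallback):
Ext.Ajax.request({
    url: serverURL,
    method: 'GET',
    params: { service: 'WFS', request: 'GetCapabilities' },
    success: function(response) {
        var doc = response.responseXML;
        var owsNS = 'http://www.opengis.net/ows';
        // The first ows:Title in document order is the service title
        var titleNode = doc.getElementsByTagNameNS(owsNS, 'Title')[0];
        var serviceTitle = titleNode ? titleNode.textContent : null;
        console.log('Service title:', serviceTitle);
    }
});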