Changing File to Upload With Variables - Postman

I am able to create a test to send a file to my API, and it works as expected.
What I need to do now is change the file sent to my API based on data in the JSON data file used for collection runs. How would you accomplish this?
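One commonly used workaround (an assumption on my part, not something confirmed in this thread) is to run the collection with Newman rather than the in-app runner: in the exported collection JSON, set the form-data file parameter's "src" to a variable such as "{{filePath}}", then supply the path for each iteration from the data file. A minimal data file, with placeholder paths, would look like:

[
  { "filePath": "fixtures/upload-1.txt" },
  { "filePath": "fixtures/upload-2.txt" }
]

newman run collection.json -d data.json --working-dir .

Newman resolves the paths relative to --working-dir; the in-app collection runner does not substitute variables into the file picker, so this approach generally works only from the command line.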

Related

Upload a file to S3 via website, process with Lambda, send file back to user

I'm fairly new to AWS.
What I am trying to do is have a static website hosted on S3. The user would be able to upload a zip file to S3 via the front end. Once received, a Lambda function would process the zip file and save the output as a single HTML file in another bucket.
How would I then return the processed file to the right user via the front end? Do I need to send cookies or something along with the original file and somehow pass them through the Lambda too? Additionally, is there a process that would return the processed file to the original uploader directly from the processing Lambda function?
Thanks All,
Appreciate it :)
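A sketch of one common pattern (not from this thread; the bucket and key names below are placeholders): have the Lambda write its output to the second bucket and return a presigned GET URL, which the front end then offers to the user. No cookies are needed; the front end just has to know the output key, e.g. a UUID it generated when uploading.

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # ... unzip and process the upload, write the HTML result to S3 ...
    output_bucket = "processed-output"    # placeholder bucket name
    output_key = "results/abc123.html"    # placeholder key, e.g. derived from an upload UUID
    # A presigned URL lets the uploader download the result without
    # making the bucket public; it expires after an hour.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": output_bucket, "Key": output_key},
        ExpiresIn=3600,
    )
    return {"statusCode": 200, "body": url}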

Test data to requests for Postman monitor

I run my collection using test data from a CSV file. However, there is no option to upload the test data file when adding a monitor for the collection. Searching the internet, I saw that the test data file has to be provided via a URL (saved in the cloud, e.g. Google Drive), but I couldn't find a source explaining how to provide this URL to the collection. Can anyone please help?
https://www.postman.com/praveendvd-public/workspace/postman-tricks-and-tips/request/8296678-d06b3fc0-6b8b-4370-9847-aee0f526e7db
You cannot use a CSV file in a monitor, but you can store the content of the CSV as a variable and use that to drive the monitor. An example can be seen in the public workspace linked above.
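A minimal sketch of that variable-driven approach (the variable names csvData and rowIndex are illustrative, not taken from the linked workspace): paste the CSV content into an environment variable, then parse it in a pre-request script so each request in the monitor run consumes one row:

// Pre-request script; assumes the environment variable "csvData"
// holds the raw CSV text, header row first.
const rows = pm.environment.get("csvData").trim().split("\n");
const header = rows[0].split(",");
const index = Number(pm.environment.get("rowIndex") || 1);
if (index < rows.length) {
    const fields = rows[index].split(",");
    // Expose each column as {{columnName}} for the current request
    header.forEach((name, i) => pm.variables.set(name.trim(), fields[i].trim()));
    pm.environment.set("rowIndex", index + 1); // advance for the next request
}

Environment changes made during a monitor run persist for that run only, which is enough to step through the rows.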

Generating Single Flow file for loading it into S3

I have a NiFi flow which fetches data from RDS tables and loads it into S3 as flat files. Now I need to generate another file whose content is the name of the file that I am loading into the S3 bucket; this needs to be a separate flow.
Example: if the extracted RDS flat file is named RDS.txt, then the newly generated file should have rds.txt as its content, and I need to load this file into the same S3 bucket.
The problem I face is that I am using a GenerateFlowFile processor and adding the flat file name as custom text in the flow file, but I cannot set up any upstream connection for the GenerateFlowFile processor, so it generates too many files. If I use a MergeContent processor after the GenerateFlowFile processor, I see duplicate values in the flow file.
Can anyone help me out with this?
The easiest path is to chain something after PutS3Object that updates the flow file contents with what you want. It's really simple to write with ExecuteScript. Something like this (Groovy):
import org.apache.nifi.processor.io.OutputStreamCallback

def ff = session.get()
if (ff) {
    // Replace the flow file's content with its own "filename" attribute
    def updated = session.write(ff, {
        it.write(ff.getAttribute("filename").bytes)
    } as OutputStreamCallback)
    // Flag the flow file so RouteOnAttribute can keep it out of an endless loop
    updated = session.putAttribute(updated, "is_updated", "true")
    session.transfer(updated, REL_SUCCESS)
}
Then you can put a RouteOnAttribute processor after PutS3Object and have it route to a dead-end relationship if it detects the is_updated attribute, or back to PutS3Object if the file has not been updated yet.
I found a simple solution for this: I added a funnel before PutS3Object. The funnel's upstream receives two files, one with the extract and the other with the file name, and its downstream is connected to PutS3Object, so both files are loaded at the same time.

Django: how to upload a file directly to a 3rd-party storage server, like Cloudinary or S3

Now, I have realized the upload process works like this:
1. Django builds the HTTP request object and populates request.FILES using an upload handler.
2. In views.py, the FieldFile instance (the file-bound counterpart of FileField) calls storage.save() to upload the file.
So, as you can see, Django always passes the data through memory or disk; if your file is too large, this costs too much time.
The design I have in mind to solve this problem is a custom upload handler that calls storage.save() on the raw input data. The only question is: how can I modify the behaviour of FileField?
Thanks for any help.
You can use this package, which adds direct-to-AWS-S3 uploads with a progress bar to file input fields:
https://github.com/bradleyg/django-s3direct
You can use one of the following packages:
https://github.com/cloudinary/pycloudinary
http://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
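If you go the django-storages route, the wiring is just settings; a minimal sketch (the bucket name is a placeholder, and credentials are usually supplied via environment variables or an IAM role):

# settings.py
INSTALLED_APPS = [
    # ...
    "storages",
]

DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-upload-bucket"  # placeholder

With this in place, every FileField saves through the S3 backend instead of local MEDIA_ROOT. Note that the upload-handler buffering described in the question still happens before storage.save() is called; avoiding it entirely means uploading from the browser straight to S3, which is what django-s3direct above does.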

Upload txt file to server

I want to create a form to upload files (txt, xls) to the server, not to the database.
Does anyone know of an example showing how I can do this?
In order to get the file onto the database server's file system, you would first have to upload the file to the database, which it sounds like you are already familiar with. From there, you can use the UTL_FILE package to write the BLOB out to the database server's file system.