Postman / Newman and uploading files from Azure Blob instead of local files

I am trying to add my Postman scripts to an Azure pipeline.
To do this I am trying out Newman.
I use the Postman API to get the latest collection as well as the correct environment, using their UIDs and an API key I have created. All good so far.
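For context, the Newman command looks roughly like this (the collection/environment UIDs and the API key are placeholders):
newman run "https://api.getpostman.com/collections/<collection-uid>?apikey=<postman-api-key>" \
  -e "https://api.getpostman.com/environments/<environment-uid>?apikey=<postman-api-key>"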
However, my collection includes some calls that do file uploads.
In Postman I tested those by simply selecting the body of the call, selecting form-data and choosing a sample file that is located in the default "Postman files" folder.
When testing Newman on my local machine, I need to copy all the sample files I want to use for uploads into the same folder that I run Newman from.
This solution is not quite right for me though, as I use the Postman API to get the correct collections and environments. I need to be able to get those files too from an alternative remote location (such as Azure Blob Storage).
I have found some guides that describe how you can edit the Postman collection file to point the "src" to a remote file. However, I cannot find any way to do this directly in Postman, in such a way that when Newman gets the collection file from the API the correct location is already in place.
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "formdata",
"formdata": [
{
"key": "files",
"type": "file",
"src": "sample.pdf"
}
]
},
Above is an extract from the collection file.
Is there a way I can make that change directly in Postman?

Postman scripts run through Newman have access to files in their working directory. We solved this by keeping a picture in a folder in our git repo, downloading the scripts to that folder, and referring to that file. This is the task we used in the build pipeline:
- task: CopyFiles@2
  inputs:
    SourceFolder:
    Contents: |
      **/PostmanTests/test-image.jpg
    TargetFolder: '$(Build.ArtifactStagingDirectory)/postman'
    OverWrite: true
    flattenFolders: true
You can then use the Postman API to download the collection and environment files to the artifact folder created above.
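A rough sketch of that download-and-run step (the collection/environment UIDs and the POSTMAN_API_KEY secret variable are placeholders, not values from our setup):
- script: |
    cd "$(Build.ArtifactStagingDirectory)/postman"
    # pull the collection and environment via the Postman API (placeholder UIDs)
    curl -s -H "X-Api-Key: $(POSTMAN_API_KEY)" "https://api.getpostman.com/collections/<collection-uid>" -o collection.json
    curl -s -H "X-Api-Key: $(POSTMAN_API_KEY)" "https://api.getpostman.com/environments/<environment-uid>" -o environment.json
    # run Newman from this folder so relative "src" paths (e.g. test-image.jpg) resolve here
    newman run collection.json -e environment.json
  displayName: 'Run Newman against downloaded collection'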
One trick here is that we used a "container" placeholder for the filename: we replaced it with {{file}} (instead of something like example.pdf) and passed the actual name in via the environment file. See the JSON below:
"body": {
"mode": "formdata",
"formdata": [
{
"key": "upload",
"type": "file",
"src": "{{file}}"
}
]
}
The environment file would then have the name of the file, in this case test-image.jpg.
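For reference, the relevant entry in the environment file would look something like this (a sketch; the environment name is made up and other fields are omitted):
{
  "name": "PostmanTests",
  "values": [
    {
      "key": "file",
      "value": "test-image.jpg",
      "enabled": true
    }
  ]
}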

Related

Unzipping files with Google Dataflow Bulk Decompress template?

I am trying to unzip zip files uploaded to Cloud Storage; they contain only image files, without any other folders inside.
I was able to do that with Cloud Functions, but I run into memory-related issues when the files get bigger. I found a Dataflow template (Bulk Decompress Cloud Storage Files) for this specific case and tried to run some jobs with parameters similar to the ones below.
{
  "jobName": "unique_job_name",
  "environment": {
    "bypassTempDirValidation": false,
    "numWorkers": 2,
    "tempLocation": "gs://bucket_name/temp",
    "ipConfiguration": "WORKER_IP_UNSPECIFIED",
    "additionalExperiments": []
  },
  "parameters": {
    "inputFilePattern": "gs://bucket_name/root_path/zip_to_extract.zip",
    "outputDirectory": "gs://bucket_name/root_path/",
    "outputFailureFile": "gs://bucket_name/root_path/failure.csv"
  }
}
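For reference, the equivalent gcloud launch would be roughly the following (the region and template path are my assumptions):
gcloud dataflow jobs run unique_job_name \
  --gcs-location gs://dataflow-templates/latest/Bulk_Decompress_GCS_Files \
  --region us-central1 \
  --num-workers 2 \
  --staging-location gs://bucket_name/temp \
  --parameters inputFilePattern=gs://bucket_name/root_path/zip_to_extract.zip,outputDirectory=gs://bucket_name/root_path/,outputFailureFile=gs://bucket_name/root_path/failure.csv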
As output, I only get one file with the same name as my zip file, without a file extension and with the content type text/plain.
Is this expected behaviour? If someone could help me unzip the file with Dataflow, I would be glad.

AWS quicksight can't ingest csv from s3 but the same data uploaded as file works

I am new to QuickSight and was just test-driving it on the QuickSight web console (I'm not using the command line anywhere in this) with some data (which I can't share; it's confidential business info). I have a strange issue: when I create a dataset by uploading the file, which is only 50 MB, it works fine; I can see a preview of the table and proceed to the visualization. But when I upload the same file to S3, make a manifest and submit it using the 'Use S3' option in the create-dataset window, I get the INCORRECT_FIELD_COUNT error.
Here's the manifest file:
{
  "fileLocations": [
    {
      "URIs": [
        "s3://testbucket/analytics/mydata.csv"
      ]
    },
    {
      "URIPrefixes": [
        "s3://testbucket/analytics/"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "containsHeader": "true"
  }
}
I know the data is not fully structured, with some rows where a few columns are missing, but how is it possible for QuickSight to automatically infer and put NULLs into the shorter rows when the file is uploaded from my local machine, but not when it comes from S3 with the manifest? Are there some different settings that I'm missing?
I'm getting the same thing, and it looks like this is fairly new code. It would be useful to know what the expected field count is, especially as it doesn't say whether there are too few or too many (either way it fails). One of those technologies that looks promising, but I'd say there's a little maturing required.
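One thing that may be worth trying (an assumption on my part, not a confirmed fix): if any fields contain embedded commas, adding a text qualifier to the manifest tells QuickSight how fields are quoted, which can change the field count it derives, e.g.:
"globalUploadSettings": {
  "format": "CSV",
  "delimiter": ",",
  "textqualifier": "\"",
  "containsHeader": "true"
}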

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. I am now able to read, from the TEST console, my object stored on S3, which is the following (it is a .json file):
[
  {
    "important": "yes",
    "name": "john",
    "type": "male"
  },
  {
    "important": "yes",
    "name": "sarah",
    "type": "female"
  },
  {
    "important": "no",
    "name": "maxim",
    "type": "male"
  }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with the mapping method.request.querystring.type in the Integration Request.
When I test, typing type=male is not taken into account; I still get the 3 elements instead of the 2 male ones.
Any reason you think this is happening?
For information, the resource structure is the following (I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
  /{folder}
    /{item}
      GET
In case anyone is interested in the answer, I have been able to solve my problem.
The full detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me).
Make sure your SQL query against your S3 data is correct. For this you can go into your S3 bucket, click on your file and select "Query with S3 Select" from the Actions menu.
Make sure that your Lambda function works, i.e. check that you build and pass the correct SQL query from the test event (see the sketch after these steps).
Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like this: "TypeL1":"$input.params('typeL1')"), using the application/json content type.
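For illustration, here is a minimal Lambda sketch using S3 Select; the bucket, key, event shape and the exact path expression in the SQL are assumptions and may need adjusting (e.g. S3Object[*] vs S3Object[*][*] for a top-level JSON array):
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # query string value passed through by the API Gateway mapping template (assumed event shape)
    type_filter = event.get("typeL1", "male")

    # run S3 Select against the JSON file; the path expression may need tweaking
    response = s3.select_object_content(
        Bucket="my-bucket",        # placeholder
        Key="data/people.json",    # placeholder
        ExpressionType="SQL",
        Expression="SELECT * FROM S3Object[*][*] s WHERE s.type = '%s'" % type_filter,
        InputSerialization={"JSON": {"Type": "DOCUMENT"}},
        OutputSerialization={"JSON": {}},
    )

    # the result comes back as an event stream; collect the record payloads
    records = ""
    for ev in response["Payload"]:
        if "Records" in ev:
            records += ev["Records"]["Payload"].decode("utf-8")
    return records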
Good luck !

What precautions do I need to take when sharing an AWS Amplify project publicly?

I'm creating a security camera IoT project that uploads images to S3 and will soon offer a UI to review those images. AWS Amplify is being used to make this happen quickly.
As I get started on the Amplify side of things, I'm noticing a config file that has very specifically named attributes and values. The team-provider-info.json file in particular, which isn't gitignored, is very specific:
{
  "dev": {
    "awscloudformation": {
      "AuthRoleName": "amplify-twintigersecurityweb-dev-123456-authRole",
      "UnauthRoleArn": "arn:aws:iam::111164163333:role/amplify-twintigersecurityweb-dev-123456-unauthRole",
      "AuthRoleArn": "arn:aws:iam::111164163333:role/amplify-twintigersecurityweb-dev-123456-authRole",
      "Region": "us-east-1",
      "DeploymentBucketName": "amplify-twintigersecurityweb-dev-123456-deployment",
      "UnauthRoleName": "amplify-twintigersecurityweb-dev-123456-unauthRole",
      "StackName": "amplify-twintigersecurityweb-dev-123456",
      "StackId": "arn:aws:cloudformation:us-east-1:111164163333:stack/amplify-twintigersecurityweb-dev-123456/88888888-8888-8888-8888-888838f58888",
      "AmplifyAppId": "dddd7dx2zipppp"
    }
  }
}
May I post this to my public repository without worry? Is there a chance for conflict in naming? How would one pull this in for use in their new project?
Per AWS Amplify documentation:
If you want to share a project publicly and open source your serverless infrastructure, you should remove or put the amplify/team-provider-info.json file in gitignore file.
At a glance, everything else generated by amplify init that is NOT in the .gitignore file is OK to share, e.g. project-config.json and backend-config.json.
Add this to .gitignore:
# not to share if public
amplify/team-provider-info.json

Linking to a Google Cloud bucket file in a terminal command?

I'm trying to find my way with Google Cloud.
I have a Debian VM instance that I am running a server on. It is installed and working via an SSH connection in a browser window. The command to start the server is "./ninjamsrv config-file-path.cfg".
I have the config file in my default Google Firebase storage bucket, as I will need to update it regularly.
I want to start the server referencing the cfg file in the bucket, e.g.:
"./ninjamsrv gs://my-bucket/ninjam-config.cfg"
But the file is not found:
error opening configfile 'gs://my-bucket/ninjam-config.cfg'
Error loading config file!
However if I run:
"gsutil acl get gs://my-bucket/"
I see:
[
  {
    "entity": "project-editors-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-owners-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Can anyone advise what I am doing wrong here? Thanks
The first thing to verify is if indeed the error thrown is a permission one. Checking the logs related to the VM’s operations will certainly provide more details in that aspect, and a 403 error code would confirm if this is a permission issue. If the VM is a Compute Engine one, you can refer to this documentation about logging.
If the error is indeed a permission one, then you should verify if the permissions for this object are set as “fine-grained” access. This would mean that each object would have its own set of permissions, regardless of the bucket-level access set. You can read more about this here. You could either change the level of access to “uniform” which would grant access to all objects in the relevant bucket, or make the appropriate permissions change for this particular object.
If the issue is not a permission one, then I would recommend trying to start the server from the same .cfg file hosted on the local directory of the VM. This might point the error at the file itself, and not its hosting on Cloud Storage. In case the server starts successfully from there, you may want to re-upload the file to GCS in case the file got corrupted during the initial upload.
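A quick way to test that, sketched with the bucket and file names from the question, would be:
gsutil cp gs://my-bucket/ninjam-config.cfg /tmp/ninjam-config.cfg
./ninjamsrv /tmp/ninjam-config.cfg
If the server starts cleanly from the local copy, the file itself is fine and the remaining question is how the gs:// path is being resolved by the server binary.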