How to run a specific request from a collection in Postman?

I have a collection with 5 subfolders, say 1, 2, 3, 4, 5. Every folder contains at least one request. I want to run a single request from folder 3 only. Is it possible to run it via the terminal?

You can use Newman, Postman's command-line collection runner, with the --folder option to target a specific folder or multiple folders:
--folder <name>
This runs only the requests within the named folder(s) of a collection. Multiple folders can be specified by using --folder multiple times, like so:
--folder f1 --folder f2
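For example, assuming you have exported the collection to a file named my-collection.json (the filename here is just a placeholder), running only the requests in folder 3 would look something like this:
newman run my-collection.json --folder "3"
Newman runs every request inside the named folder. Newer Newman versions reportedly also accept a request name as the --folder argument, which would let you run just the one request; check newman run --help for your version.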

Related

How do I view the contents of my build artifact folder in Azure DevOps?

I am trying to modify my configuration file, dataSettings.json, located somewhere inside the build artifacts folder. Figuring out the correct path to it feels like working in the dark. Using "**/dataSettings.json" as the path doesn't work in my task, since I don't know the artifact's folder structure, nor whether dataSettings.json even exists.
Is there a way to quickly view the contents of a build artifacts folder within DevOps?
Add a script step in your shell language of choice (Bash, PowerShell, Windows command prompt, etc.) that recursively lists the directory structure: gci -rec in PowerShell, dir /s in the Windows command prompt, ls -R in Bash. A sketch of such a step is shown below.
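A minimal sketch, assuming a Bash step in a classic release pipeline where the artifacts are downloaded to $(System.ArtifactsDirectory) (in YAML pipelines the equivalent would typically be $(Pipeline.Workspace)):
# recursively list everything under the downloaded artifacts, then look for the file
ls -R "$(System.ArtifactsDirectory)"
find "$(System.ArtifactsDirectory)" -name dataSettings.json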
You can quickly view the contents of the artifacts in many of the tasks in your release pipeline.
For example, if you are using the File Transform task or the Azure App Service Deploy task, you can click the three dots at the right end of the Package or folder field to view the contents and folder structure of the artifacts.
The same goes for the Source Folder field of the Copy Files task, for example.
If the artifact is a zip file, you can navigate to the corresponding build pipeline run and download the artifact locally to check its contents. You can download the build artifacts from the build summary page.

Syncing files between projects/buckets in Google Cloud Storage

I am trying to synchronize files between two projects and two buckets on Google Cloud.
However, I would like to only copy files that are in A (source) but not in B (destination). It is fine (preferred, even) to overwrite files that are in both A and B.
When I do the following:
In my bucket, I create a folder test and add a folder A containing file-1.
I run the following command: gsutil cp -r gs://from-project.appspot.com/test gs://to-project.appspot.com/test2
This works fine, and I have the folder A within the folder test2 in my to-project bucket.
Then the problem occurs:
I add a folder B, and within folder A I delete file-1 and add file-2 (to test the case of a file that is in A but not in B).
However, when I run the same command, I don't end up with only file-2 copied plus the additional folder B; instead I get a new folder named test inside test2, and inside it I find A and B, but without file-1 in A (basically a replica of the new situation).
Why does this happen, and how can I prevent it so that the syncing works?
The gsutil rsync command is the preferred way to synchronize the contents of two buckets.
You can use the -d option to delete files in the destination bucket that are not found in the source bucket. Be careful with it, though, because it deletes files from the destination bucket.
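A minimal sketch using the bucket paths from the question (-r recurses into subfolders; add -d only if files that exist solely in the destination should be removed):
gsutil rsync -r gs://from-project.appspot.com/test gs://to-project.appspot.com/test2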

Getting Data From A Specific Website Using Google Cloud

I have a machine learning project and I have to get data from a website every 15 minutes. I cannot use my own computer, so I will use Google Cloud. I am trying to use Google Compute Engine and I have a script for getting the data (here is the link: https://github.com/BurkayKirnik/Automatic-Crypto-Currency-Data-Getter/blob/master/code.py). This script gets data every 15 minutes and writes it to CSV files. I can run this code by opening an SSH terminal and executing it from there, but it stops working when I close the terminal. I tried running it from a startup script, but it doesn't work that way either. How can I run this and save the CSV files? BTW, I have to install an API to run the code and I am doing that in the startup script; there is no problem with that part.
Instances running on Google Cloud Platform can be configured with the same tools that are available in the operating system they run. If your instance is a Linux instance, the best method would be to use a cron job to execute your script repeatedly at your chosen interval.
Once you have accessed the instance via SSH, you can open the crontab configuration file by running the following command:
$ crontab -e
The above command will provide access to your personal crontab configuration (for the user you are logged in as). If you want to run the script as root you can use this instead:
$ sudo crontab -e
You can now edit the crontab configuration and add an entry that tells cron to execute your script at your required interval (in your case every 15 minutes).
Therefore, your crontab entry should look something like this:
*/15 * * * * /path/to/your/script.sh
Notice that the first field is the minutes field, so */15 tells the cron daemon to execute the script once every 15 minutes.
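Since the linked script is a Python file (code.py), the entry might look more like the sketch below; the interpreter path, script location, and log file are assumptions to adapt, and redirecting output to a log file makes debugging much easier:
*/15 * * * * /usr/bin/python3 /home/your_user/code.py >> /home/your_user/code.log 2>&1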
Once you have edited the crontab configuration file, it is a good idea to restart the cron daemon to ensure the change you made takes effect. To do this you can run:
$ sudo service cron restart
If you would like to check the status to ensure the cron service is running you can run:
$ sudo service cron status
Your script will now execute every 15 minutes.
In terms of storing the CSV files, you could either program your script to store them on the instance, or alternatively use a Google Cloud Storage bucket. Files can be copied to buckets easily using the gsutil command (part of the Cloud SDK) as described here. It's also possible to mount buckets as a file system as described here.
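For example (a sketch; the bucket name and local path are placeholders), copying the generated CSV files to a bucket could be the last step of your script or another cron entry:
gsutil cp /home/your_user/*.csv gs://your-bucket-name/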

How to sync folders in Ubuntu 16.04 on AWS

I want to do a two-directional sync between /var/www/html/data and /var/www/data, so that if there are changes in either of those folders they automatically stay the same. What's the best way to do this?
The easiest way would be to create a symbolic link.
ln -s /var/www/html/data /var/www/data
This basically exposes the same content under two different paths, so both locations always show the same files. In many cases when deploying web apps, a symlink is created after the deploy process has finished.
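A quick sanity check after creating the link (assuming /var/www/data did not already exist as a directory; if it did, the link would be created inside it instead):
ls -l /var/www/data
# lrwxrwxrwx ... /var/www/data -> /var/www/html/data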
If you want duplicated copies you could use rsync.
rsync -av /var/www/html/data /var/www/data
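Note that rsync on its own is a one-way copy, and without trailing slashes the command above nests the data directory inside the destination folder. To mirror the contents of one folder into the other, a sketch you would schedule via cron or run after each change (--delete also removes files that have disappeared from the source):
rsync -av --delete /var/www/html/data/ /var/www/data/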

Do multiple casperjs instances share the same cookies?

I am a little confused about how multiple casperjs instances work simultaneously.
My understanding is that if we have casperjs c.1.js, c.2.js, ..., c.x.js (all with the same code), this will create multiple processes that manage their resources individually, such as separate cookie files. But if we just run casperjs c.x.js multiple times, the instances will share the same cookie file.
Is my understanding right?
Thanks for any input.
Each phantomjs instance has its own phantom.cookies object. If you run casperjs c.x.js multiple times, each instance will have its own cookies. If you want to store these cookies in separate files, you can use a bash script such as this:
#!/bin/bash
# Usage: ./test.sh 10 snap.js   -> runs snap.js 10 times, each instance with its own cookie file
export PHANTOMJS_EXECUTABLE=/tmp/casperjs/phantomjs   # or: ln -sf /tmp/casperjs/phantomjs /usr/local/bin
# export SLIMERJS_EXECUTABLE="/root/slimerjs-0.9.5/slimerjs"   # or: ln -sf /root/slimerjs-0.9.5/slimerjs /usr/local/bin
num=0
while [ "$num" != "$1" ]
do
    let "num++"
    echo instance_"$num" >>/root/t
    # each worker gets its own --cookies-file, so cookies are stored separately
    /tmp/casperjs/bin/casperjs --cookies-file=/root/casperjs/cookies_"$num".txt /root/casperjs/"$2" >>/root/t &
    echo "$num $1 $2"
done
exit 0
By doing so, you will have several workers, each using its own cookie file.
SlimerJS:
Cookies are stored in an SQLite database inside the Mozilla profile. If you want persistent cookies, you cannot point to a file as with PhantomJS; instead, you should create a permanent profile. See profiles.
Read also:
https://docs.slimerjs.org/current/api/cookie.html#cookie
https://docs.slimerjs.org/current/api/phantom.html#phantom-cookies