Postman Collection Format v1 is no longer supported and cannot be imported directly. You may convert your collection to Format v2 and try importing again. - Postman

I need your help with this error. I managed to convert the .json file from version 1.0.0 to version 2.0.0 with the following command at the prompt:
C:\Users\AC\Desktop\test>postman-collection-transformer convert -i test.json -o prueba1.json -j 1.0.0 -p 2.0.0 -P
At the following URL is the collection that I converted, and it is the one that doesn't update after I make changes in Visual Studio Code: when I send the request, the file is not updated with the changes I specify in Visual Studio Code. I don't know what could be going wrong. Why doesn't Postman allow version 1.0.0?
Capture Visual
Anaconda run
In this image it should return an id, not that phrase. It's as if something is lost after the conversion.

The command won't update the file in place; it will create a new collection file:
postman-collection-transformer convert -i old_collection.json -o new_collection.json -j 1.0.0 -p 2.0.0 -P
The above command converts the v1 old_collection.json file and creates a new file, new_collection.json.
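As a quick sanity check after conversion, the schema version can be inspected directly in the JSON: v2 collections carry an info.schema URL, while v1 collections keep metadata such as id and order at the top level. A minimal sketch (the detection heuristic is an assumption, not an official Postman API):

```python
import json

def collection_version(path):
    """Return a rough guess at a Postman collection's format version.

    v2 collections declare their schema under info.schema; v1 collections
    have no "info" block and keep id/order/requests at the top level.
    """
    with open(path) as f:
        doc = json.load(f)
    schema = doc.get("info", {}).get("schema", "")
    if "v2" in schema:
        return "2.x"
    if "id" in doc and ("order" in doc or "requests" in doc):
        return "1.x"
    return "unknown"
```

Running this on the transformer's output file should report "2.x"; if it still reports "1.x", the conversion did not actually take effect.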

Related

Need a better understanding of installing AWS-Nuke for Mac

I'm sure this is just a lack of knowledge on my end, but I'm having a hard time understanding how to install aws-nuke following their documentation. I have downloaded the latest release, but the install instructions are not so clear to me: https://github.com/rebuy-de/aws-nuke#:~:text=Use%20Released%20Binaries,Run%20%24%20aws%2Dnuke%2Dv2.16.0%2Dlinux%2Damd64
Any suggestions would be appreciated.
On a Mac you can run aws-nuke using the following steps:
Open a terminal (Command + Space, then type "terminal").
Grab the latest version of aws-nuke. Currently the latest version is 2.17.0, but this will change in the future. To download aws-nuke, run one of the following commands:
For M1 Mac:
wget https://github.com/rebuy-de/aws-nuke/releases/download/v2.17.0/aws-nuke-v2.17.0-darwin-arm64.tar.gz
For Intel Mac:
wget https://github.com/rebuy-de/aws-nuke/releases/download/v2.17.0/aws-nuke-v2.17.0-darwin-amd64.tar.gz
Extract the package using the following command (note: use amd64 instead of arm64 if you are on an intel Mac):
tar -xvf aws-nuke-v2.17.0-darwin-arm64.tar.gz
Create a config.yml file with the following content and place it next to the extracted executable:
regions:
- global
account-blocklist:
- "999999999999" # leave this as it is, since the current version won't work if you don't provide a blocklist
accounts:
  "000000000000": {} # fill in your own AWS account number
Run the following command:
./aws-nuke-v2.17.0-darwin-arm64 -c config.yml
This should list the resources which might be deleted. If you are ok with the list, append --no-dry-run to the previous command.
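If you need to run aws-nuke against several accounts, the config shown above can also be rendered programmatically instead of edited by hand. A minimal standard-library sketch (the function name and defaults are my own; the account numbers are placeholders, as in the answer):

```python
def make_nuke_config(account_id, blocklist=("999999999999",), regions=("global",)):
    """Render a minimal aws-nuke config.yml as a string.

    aws-nuke refuses to run without an account-blocklist, so a dummy
    entry is kept by default, mirroring the hand-written config above.
    """
    lines = ["regions:"]
    lines += ["- %s" % r for r in regions]
    lines.append("account-blocklist:")
    lines += ['- "%s"' % b for b in blocklist]
    lines.append("accounts:")
    lines.append('  "%s": {}' % account_id)
    return "\n".join(lines) + "\n"
```

Writing the returned string to config.yml gives a file equivalent to the one above, with your own account number filled in.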

Create PDF reports using R Markdown (TinyTeX) in Snakemake using Conda

I am currently having problems using TinyTeX in a conda environment with Snakemake. I have to install TinyTeX using the command tinytex::install_tinytex() before running the pipeline. This installs TinyTeX outside of the created environment (which isn't that big of a problem, but not preferred either). The main problem is that every time I execute my Snakemake pipeline, it will try to reinstall TinyTeX, which I don't want. Could anyone tell me the easiest way to check whether it's already installed? Should I be using the command Rscript -e \"tinytex:::is_tinytex()\" with an if-statement? And what is the best way to write that if-statement by calling Rscript -e in Snakemake? Or should I just write a boolean text file on the first run which specifies whether TinyTeX has been installed before?
It kinda sucks that the TinyTeX conda dependency doesn't work on its own without additional installation...
Snakemake rule (ignore input/output):
rule assembly_report_rmarkdown:
    input:
        rules.assembly_graph2image_bandage.output,
        rules.assembly_assessment_quast.output,
        rules.coverage_calculator_shortreads.output,
        rules.coverage_calculator_longreads.output
    output:
        config["outdir"] + "Hybrid_assembly_report.pdf"
    conda:
        "envs/r-rmarkdown.yaml"
    shell:
        """
        cp report/RMarkdown/Hybrid_assembly_report.Rmd {config[outdir]}Hybrid_assembly_report.Rmd
        Rscript -e \"tinytex::install_tinytex()\"
        Rscript -e \"rmarkdown::render('{config[outdir]}Hybrid_assembly_report.Rmd')\"
        rm -f {config[outdir]}Hybrid_assembly_report.Rmd {config[outdir]}Hybrid_assembly_report.tex
        """
Conda YAML:
name: r-rmarkdown
channels:
  - conda-forge
  - bioconda
dependencies:
  - r-base=4.0.3
  - r-rmarkdown=2.5
  - r-tinytex=0.27
Thanks in advance.
I think I've solved the issue. Instead of calling Rscript -e, I have put the following if-statement in the setup chunk in R Markdown (which, if I'm correct, runs before any other code). I then uninstalled TinyTeX to check whether it would install only once, which it did.
knitr::opts_chunk$set(echo = TRUE)
library(knitr)
if (!tinytex:::is_tinytex()){
  tinytex::install_tinytex()
}

Running two related commands in Subprocess Python

I am trying to start mjpg-streamer from a python script on the raspberry pi. The instructions for how to start it from the command line are here and consist of running
export LD_LIBRARY_PATH=.
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so"
from /var/www/mjpg-streamer/mjpg-streamer-experimental. When I do it in the terminal, it works fine.
However, I am trying to run it using subprocess.call like this:
subprocess.call('export LD_LIBRARY_PATH=.', shell=True, cwd='/var/www/mjpg-streamer/mjpg-streamer-experimental')
subprocess.call('./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so -x 640 -y 480 -fps 15 -vf -hf"', shell=True, cwd='/var/www/mjpg-streamer/mjpg-streamer-experimental')
And that is giving me the error:
MJPG Streamer Version: svn rev: ERROR: could not find input plugin
Perhaps you want to adjust the search path with:
# export LD_LIBRARY_PATH=/path/to/plugin/folder
dlopen: input_raspicam.so: cannot open shared object file: No such file or directory
I'm guessing it is because the first command doesn't provide the relevant path to the plugin? I'm not entirely sure how these commands work anyway, so any insight into that would also be helpful!
I have also tried using os.system to run these commands and have received the same error.
I'm sure I'm doing something silly, so thanks in advance for your patience!

Weka: Array Index out of bounds exception with CSV files

I get an unfriendly ArrayIndexOutOfBoundsException when I try to input a CSV file to Weka, but it works fine when I use the same file in the GUI.
pvadrevu#MacPro~$ java -Xmx2048m -cp weka.jar weka.classifiers.functions.Logistic -R 1.0E-8 -M -1 -t "some.csv" -d temp.model
Refreshing GOE props...
[KnowledgeFlow] Loading properties and plugins...
[KnowledgeFlow] Initializing KF...
java.lang.ArrayIndexOutOfBoundsException: 1
weka.classifiers.evaluation.Evaluation.setPriors(Evaluation.java:3843)
weka.classifiers.evaluation.Evaluation.evaluateModel(Evaluation.java:1503)
weka.classifiers.Evaluation.evaluateModel(Evaluation.java:650)
weka.classifiers.AbstractClassifier.runClassifier(AbstractClassifier.java:359)
weka.classifiers.functions.Logistic.main(Logistic.java:1134)
at weka.classifiers.evaluation.Evaluation.setPriors(Evaluation.java:3843)
at weka.classifiers.evaluation.Evaluation.evaluateModel(Evaluation.java:1503)
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:650)
at weka.classifiers.AbstractClassifier.runClassifier(AbstractClassifier.java:359)
at weka.classifiers.functions.Logistic.main(Logistic.java:1134)
It turns out that the newer versions of Weka do not handle CSV files through the command line. There are two options:
Revert to an older version of Weka. 3.6.11 works fine for me, while 3.7.11 does not.
Convert the CSV files to ARFF. This can be done using the Weka GUI.
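If the GUI is not convenient, a CSV file can also be turned into ARFF with a small script. A minimal sketch (this is not Weka's own converter; attribute types are naively inferred as numeric vs. nominal, the relation name is arbitrary, and values containing spaces or commas are not quoted):

```python
import csv
import io

def csv_to_arff(csv_text, relation="converted"):
    """Convert CSV text with a header row into ARFF text.

    Columns whose values all parse as floats become numeric attributes;
    everything else becomes a nominal attribute over the observed values.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    lines = ["@relation " + relation, ""]
    for col, name in enumerate(header):
        values = [r[col] for r in data]
        try:
            for v in values:
                float(v)  # raises ValueError for non-numeric columns
            lines.append("@attribute %s numeric" % name)
        except ValueError:
            uniq = sorted(set(values))
            lines.append("@attribute %s {%s}" % (name, ",".join(uniq)))
    lines += ["", "@data"]
    lines += [",".join(r) for r in data]
    return "\n".join(lines)
```

The resulting .arff file can then be passed to the classifier in place of the CSV.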

Inspectdb in Oracle-Django gets OCI-22061: invalid format text [T

I'm using Oracle Database 10g xe universal Rel.10.2.0.1.0 against cx_Oracle-5.0.4-10g-unicode-py26-1.x86_64 on a django project on Ubuntu 10.04
My db is generated by Oracle 10gr2 enterprise edition (on Windows XP, import done in US7ASCII character set and AL16UTF16 NCHAR character set, import server uses AL32UTF8 character set, export client uses EL8MSWIN1253 character set)
When I try django-admin.py inspectdb I get the following error:
...
    indexes = connection.introspection.get_indexes(cursor, table_name)
  File "/usr/lib/pymodules/python2.6/django/db/backends/oracle/introspection.py", line 116, in get_indexes
    for row in cursor.fetchall():
  File "/usr/lib/pymodules/python2.6/django/db/backends/oracle/base.py", line 483, in fetchall
    for r in self.cursor.fetchall()])
cx_Oracle.DatabaseError: OCI-22061: invalid format text [T
I am aware of "inspectdb works with PostgreSQL, MySQL and SQLite" but as I understand from other posts it also works with Oracle somehow.
Does anyone know why I get this error or how I could fix it?
Can you try updating to the cx_Oracle 5.1.1 package, then run this:
python manage.py inspectdb --database dbname
You can download cx_Oracle-5.1.2 and fix the issue using the commands below.
$ wget -c http://prdownloads.sourceforge.net/cx-oracle/cx_Oracle-5.1.2-11g-py27-1.x86_64.rpm
Command to install rpm
$ sudo yum install cx_Oracle-5.1.2-11g-py27-1.x86_64.rpm
Also download the Oracle Instant Client: http://download.oracle.com/otn/linux/instantclient/11101/basic-11.1.0.6.0-linux-x86_64.zip and http://download.oracle.com/otn/linux/instantclient/11101/sdk-11.1.0.6.0-linux-x86_64.zip
Extract the downloaded zip files.
Copy the include folder from sdk-11.1.0.6.0-linux-x86_64 and paste it into basic-11.1.0.6.0-linux-x86_64.
Set the paths below in your .bashrc file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/oracle_lib/oracle_instantclient_11_1
export ORACLE_HOME=/oracle_lib/oracle_instantclient_11_1
$ ls /oracle_lib/oracle_instantclient_11_1
You should see the include folder with its list of files.
Then reload .bashrc using $ source ~/.bashrc
I have tested it.