How can I evaluate the depth map result? - evaluation

Code section 1
!mkdir -p inputs
!mkdir -p outputs_midas
!mkdir -p outputs_leres
Code section 2
Clone git repo
!git clone https://github.com/compphoto/BoostingMonocularDepth.git
!wget https://sfu.ca/~yagiz/CVPR21/latest_net_G.pth
#!gdown https://drive.google.com/u/0/uc?id=1cU2y-kMbt0Sf00Ns4CN2oO9qPJ8BensP&export=download
# Downloading merge model weights
!mkdir -p /content/BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/
!mv latest_net_G.pth /content/BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/
# Downloading Midas weights
!wget https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt
!mv midas_v21-f6b98070.pt /content/BoostingMonocularDepth/midas/model.pt
# Downloading LeRes weights
!wget https://cloudstor.aarnet.edu.au/plus/s/lTIJF4vrvHCAI31/download
!mv download /content/BoostingMonocularDepth/res101.pth
Code section 3
%cd BoostingMonocularDepth/
Running the method using LeRes
!python run.py --Final --data_dir /content/inputs --output_dir /content/outputs_leres/ --depthNet 2
I applied this code to create a depth map, and this is my result. I need a way to evaluate the result.
There is a part of the GitHub repo for evaluation, but I don't know how to run this part:
./evaluation/evaluatedataset.m
code link:
!git clone https://github.com/compphoto/BoostingMonocularDepth.git
and this is the image and the result:
[input image]
[resulting depth map]
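The repo's evaluation script (./evaluation/evaluatedataset.m) is MATLAB code and needs a dataset with ground-truth depth. If you only want a quick numeric sanity check in Python, the sketch below computes the common monocular-depth metrics (AbsRel, RMSE and the δ<1.25 accuracy) between one predicted map and one ground-truth map. The file paths, the 16-bit PNG assumption and the median alignment are my assumptions, not part of the repo, and since MiDaS/LeReS-style predictions are only relative depth this is a rough check rather than a replacement for the MATLAB evaluation.
import numpy as np
import cv2

# Hypothetical paths: the prediction written by run.py into --output_dir, and a ground-truth map from your dataset
pred = cv2.imread("/content/outputs_leres/sample1.png", cv2.IMREAD_UNCHANGED).astype(np.float64)
gt = cv2.imread("/content/ground_truth/sample1.png", cv2.IMREAD_UNCHANGED).astype(np.float64)

mask = gt > 0                                # evaluate only where ground truth is valid
pred, gt = pred[mask], gt[mask]
pred *= np.median(gt) / np.median(pred)      # crude median alignment, since the prediction is only up to scale

abs_rel = np.mean(np.abs(pred - gt) / gt)    # absolute relative error
rmse = np.sqrt(np.mean((pred - gt) ** 2))    # root mean squared error
delta1 = np.mean(np.maximum(pred / gt, gt / pred) < 1.25)  # threshold accuracy

print(f"AbsRel={abs_rel:.4f}  RMSE={rmse:.2f}  delta<1.25={delta1:.3f}")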

Related

Argo giving x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs error

I've installed Argo on a managed k8s service following the guidelines here.
When I launch the following example task, I get an error (if you have Argo installed, you should be able to copy-paste the code below):
# create a.yml
cat >> a.yml<<EOL
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # Name of this Workflow
spec:
  entrypoint: whalesay          # Defines "whalesay" as the "main" template
  templates:
  - name: whalesay              # Defining the "whalesay" template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]     # This template runs "cowsay" in the "whalesay" image with arguments "hello world"
EOL
# submit a.yml
argo --insecure-skip-tls-verify --insecure-skip-verify -n argo submit a.yml
# monitor
$ argo list
# NAME STATUS AGE DURATION PRIORITY
# hello-world-hxrcp Succeeded 4m 10s 0
argo watch --insecure-skip-tls-verify --insecure-skip-verify -v -n argo hello-world-hxrcp
# DEBU[2021-06-09T19:37:22.125Z] CLI version version="{v3.0.7 2021-05-25T18:57:09Z e79e7ccda747fa4487bf889142c744457c26e9f7 v3.0.7 clean go1.16.3 gc linux/amd64}"
# DEBU[2021-06-09T19:37:22.125Z] Client options opts="(argoServerOpts=(url=127.0.0.1:2746,path=,secure=true,insecureSkipVerify=true,http=true),instanceID=)"
# DEBU[2021-06-09T19:37:22.125Z] curl -H 'Accept: text/event-stream' -H 'Authorization: ******' 'https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0'
# FATA[2021-06-09T19:37:22.536Z] Get "https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
Why am I seeing this error?
The install process was this:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
CLI (taken from the latest version here):
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.7/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
sudo mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
# link with server
# recommended on user panel in interface
cat >> ~/.bashrc <<EOL
export ARGO_SERVER='127.0.0.1:2746'
export ARGO_HTTP1=true
export ARGO_SECURE=true
export ARGO_BASE_HREF=
export ARGO_TOKEN=''
export ARGO_NAMESPACE=argo
export ARGO_INSECURE_SKIP_VERIFY=true
EOL
# check it works:
argo list
Heyo, I ran into this issue when setting up with the Argo Helm chart on kind. The problem is that you have to disable TLS verification for the executor (the thing that executes the workflow) using the ARGO_KUBELET_INSECURE env var. Here are the docs: https://argoproj.github.io/argo-workflows/environment-variables/#executor
Sorry I don't have the exact code change you need for your setup, but I'm sure you can figure that out now that you know what the problem is ;).
Here's what my helm values.yaml file looks like in case that helps anyone else:
server:
  serviceType: LoadBalancer
  extraArgs:
  - --auth-mode=server
controller:
  containerRuntimeExecutor: k8sapi
executor:
  env:
  - name: ARGO_KUBELET_INSECURE
    value: "true"   # quoted so it renders as a string in the pod spec

Serverspec test fail when i run its pipeline from other pipeline

I'm running 3 pipelines in Jenkins (CI, CD, CDP). When I run the CI pipeline, the final stage triggers the CD (Continuous Deployment) pipeline, which receives an APP_VERSION parameter from the CI (Continuous Integration) pipeline, deploys an instance with Packer, and runs the Serverspec tests, but the Serverspec test fails.
But the demo-app is installed via SaltStack.
The strange thing is that when I run the CD pipeline and pass the APP_VERSION parameter manually, it WORKS!
This is the final stage of the CI pipeline:
stage "Trigger downstream"
echo 'parametro'
def versionApp = sh returnStdout: true, script:"echo \$(git rev-parse --short HEAD) "
build job: "demo-pipeCD", parameters: [[$class: "StringParameterValue", name: "APP_VERSION", value: "${versionApp}"]], wait: false
}
I have passed the sbin PATH to Serverspec and it did not work.
EDIT: I am adding the test code.
require 'spec_helper'
versionFile = open('/tmp/APP_VERSION')
appVersion = versionFile.read.chomp
describe package("demo-app-#{appVersion}") do
  it { should be_installed }
end
Also, I am adding the job pipeline:
#!groovy
node {
    step([$class: 'WsCleanup'])

    stage "Checkout Git repo"
    checkout scm

    stage "Checkout additional repos"
    dir("pipeCD") {
        git "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/pipeCD"
    }

    stage "Run Packer"
    sh "echo $APP_VERSION"
    sh "\$(export PATH=/usr/bin:/root/bin:/usr/local/bin:/sbin)"
    sh "/opt/packer validate -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json"
    sh "/opt/packer build -machine-readable -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json | tee packer/packer.log"
To repeat: the APP_VERSION parameter in the job pipeline is right, and the demo-app is installed before the test executes.

name input/output files in snakemake according to variable (not wildcard) in config.yaml

I am trying to edit and run a Snakemake pipeline. In a nutshell, the pipeline calls a default genome aligner (minimap) and produces output files with this name. I am trying to add an aligner variable to config.yaml to specify the aligner I want to call. Also (where I am actually stuck), the output files should have the name of the aligner specified in config.yaml.
My config.yaml looks like this:
# this config.yaml is passed to Snakefile in pipeline-structural-variation subfolder.
# Snakemake is run from this pipeline-structural-variation folder; it is necessary to
# pass an appropriate path to the input-files (the ../ prefix is sufficient for this demo)
aligner: "ngmlr" # THIS IS THE VARIABLE I AM ADDING TO THIS FILE. VALUES COULD BE minimap or ngmlr
# FASTQ file or folder containing FASTQ files
# check if this has to be gzipped
input_fastq: "/nexusb/Gridion/20190917PGD2staal2/PD170815/PD170815_cat_all.fastq.gz" # original is ../RawData/GM24385_nf7_chr20_af.fastq.gz
# FASTA file containing the reference genome
# note that the original reference sequence contains only the sequence of chr20
reference_fasta: "/nexus/bhinckel/19/ONT_projects/PGD_breakpoint/ref_hg19_local/hg19_chr1-y.fasta" # original is ../ReferenceData/human_g1k_v37_chr20_50M.fasta
# Minimum SV length
min_sv_length: 300000 # original value was 40
# Maximum SV length
max_sv_length: 1000000 # original value was 1000000. Note that the value I used to run the pipeline for the sample PD170677 was 100000000000, which will be coerced to NA in the R script (/home/bhinckel/ont_tutorial_sv/ont_tutorial_sv.R)
# Min read length. Shorter reads will be discarded
min_read_length: 1000
# Min mapping quality. Reads with lower mapping quality will be discarded
min_read_mapping_quality: 20
# Minimum read support required to call a SV (auto for auto-detect)
min_read_support: 'auto'
# Sample name
sample_name: "PD170815" # original value was GM24385.nf7.chr20_af. Note that this can be a list
I am posting below the sections of my Snakefile which generate output files with the extension _minimap2.bam, which I would like to replace with either _minimap2.bam or _ngmlr.bam, depending on the aligner in config.yaml:
# INPUT BAM folder
bam = None
if "bam" in config:
    bam = os.path.join(CONFDIR, config["bam"])

# INPUT FASTQ folder
FQ_INPUT_DIRECTORY = []
if not bam:
    if not "input_fastq" in config:
        print("\"input_fastq\" not specified in config file. Exiting...")
    FQ_INPUT_DIRECTORY = os.path.join(CONFDIR, config["input_fastq"])
    if not os.path.exists(FQ_INPUT_DIRECTORY):
        print("Could not find {}".format(FQ_INPUT_DIRECTORY))
    MAPPED_BAM = "{sample}/alignment/{sample}_minimap2.bam" # Original
    #MAPPED_BAM = "{sample}/alignment/{sample}_{alignerName}.bam" # this did not work
    #MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam" # this did not work either
else:
    MAPPED_BAM = find_file_in_folder(bam, "*.bam", single=True)
...
if config['aligner'] == 'minimap':
    rule index_minimap2:
        input:
            REF = FA_REF
        output:
            "{sample}/index/minimap2.idx"
        threads: config['threads']
        conda: "env.yml"
        shell:
            "minimap2 -t {threads} -ax map-ont --MD -Y {input.REF} -d {output}"

    rule map_minimap2:
        input:
            FQ = FQ_INPUT_DIRECTORY,
            IDX = rules.index_minimap2.output,
            SETUP = "init"
        output:
            BAM = "{sample}/alignment/{sample}_minimap2.bam",
            BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
        conda: "env.yml"
        threads: config["threads"]
        shell:
            "cat_fastq {input.FQ} | minimap2 -t {threads} -K 500M -ax map-ont --MD -Y {input.IDX} - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
else:
    print(f"Aligner is {config['aligner']} - skipping indexing step for minimap2")

    rule map_ngmlr:
        input:
            REF = FA_REF,
            FQ = FQ_INPUT_DIRECTORY,
            SETUP = "init"
        output:
            BAM = "{sample}/alignment/{sample}_minimap2.bam",
            BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
        conda: "env.yml"
        threads: config["threads"]
        shell:
            "cat_fastq {input.FQ} | ngmlr -r {input.REF} -t {threads} -x ont - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
I initially tried to create an alignerName parameter, similar to the sample parameter, as shown below:
# Parameter: sample_name
sample = "sv_sample01"
if "sample_name" in config:
    sample = config['sample_name']

###############
#
# code below created by me
#
###############

# Parameter: aligner_name
alignerName = "defaultAligner"
if "aligner" in config:
    alignerName = config['aligner']
Then I tried to put {alignerName} wherever I have minimap2 in my input/output files (see the commented MAPPED_BAM definitions above), but this throws an error. I guess Snakemake interprets {alignerName} as a wildcard, whereas what I want is simply to pass the value defined in config['aligner'] into the input/output file names. I also tried an f-string (MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam"), but that did not work either.
You are close!
The way wildcards work in Snakemake is that they get interpreted last, while f-strings get interpreted first. To keep a literal curly brace in an f-string, you escape it with another curly brace, like so:
print(f"{{keep curly}}")
>>> {keep curly}
So all we need to do is
MAPPED_BAM = f"{{sample}}/alignment/{{sample}}_{config['aligner']}.bam"
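To see what this does, here is a tiny illustrative snippet (not part of the pipeline) using the aligner value from the config above:
config = {'aligner': 'ngmlr'}  # stand-in for the value loaded from config.yaml
MAPPED_BAM = f"{{sample}}/alignment/{{sample}}_{config['aligner']}.bam"
print(MAPPED_BAM)
# prints: {sample}/alignment/{sample}_ngmlr.bam
# the aligner name is baked in when the Snakefile is parsed, while {sample} survives as a Snakemake wildcard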

Converting GeoLite2 data for use with xtables geoip

My apologies if this has been covered here or elsewhere. I read the postings back to 2016.
My Debian system stopped updating the xtables geoip database. On investigation it turned out that this is because MaxMind dropped support for the legacy GeoIP databases. I have got as far as installing and configuring MaxMind's geoipupdate program for the GeoLite2 database and scheduling it weekly in crontab.
At this point I am stumped. geoipupdate returns a .mmdb database, which is not usable by the Debian-supplied scripts that convert .csv files into the country code files in /usr/share/xt_geoip/LE and /usr/share/xt_geoip/BE.
The Debian package xtables-addons has not been updated to deal with this situation.
Assistance or a pointer to a solution will be gratefully received. At present I am still using the last valid database, which is now over six months old.
I eventually ended up writing this script, which now runs weekly. So far (three months on) it appears to be satisfactory.
cat update-geoip.sh
#!/bin/bash -e
GEOLITE_URL="https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip"
GEOLITE_ZIP="GeoLite2-Country-CSV.zip"
COUNTRY_URL="http://download.geonames.org/export/dump/countryInfo.txt"
#
# Switch to the GeoIP directory if not already there
#
echo "--> cd /usr/share/xt_geoip"
cd /usr/share/xt_geoip
#
# Remove anything remaining from previous failed runs
#
# Note: DO NOT delete the existing BE and LE subfolders at this
# time. If the download fails the result would be no
# database at all.
#
echo "--> rm -r GeoLite2*"
rm -r -f GeoLite2*
echo "--> rm countryInfo.txt"
rm -f countryInfo.txt
echo "--> rm GeoIP-legacy.csv"
rm -f GeoIP-legacy.csv
#
# Get the GeoIP ZIP file
#
echo "--> wget --no-check-certificate $GEOLITE_URL"
wget --no-check-certificate $GEOLITE_URL
#
# See if the ZIP file now exists
#
if [ ! -e $GEOLITE_ZIP ]; then
echo "--> GeoIP ZIP file did not download"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed
GeoIP update failed.
Unable to download GeoIP ZIP file
$GEOLITE_ZIP
EOM
exit
fi
#
# Unzip the ZIP file
#
echo "--> unzip $GEOLITE_ZIP"
unzip $GEOLITE_ZIP
#
# Delete the ZIP file
#
#echo "--> rm $GEOLITE_ZIP"
rm $GEOLITE_ZIP
#
# Move the received data directory to a standard name
#
echo "--> mv GeoLite2-Country-CSV_* GeoLite2"
mv GeoLite2-Country-CSV_* GeoLite2
#
# See if the critical GeoIP data files now exist
#
if [ ! -e "GeoLite2/GeoLite2-Country-Blocks-IPv4.csv" ] ||
[ ! -e "GeoLite2/GeoLite2-Country-Blocks-IPv6.csv" ]; then
echo "--> GeoIP data files are missing"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed
GeoIP update failed.
GeoIP data file(s) are missing
GeoLite2/GeoLite2-Country-Blocks-IPv4.csv
GeoLite2/GeoLite2-Country-Blocks-IPv6.csv
EOM
exit
fi
#
# Get the country info data file
#
echo "--> wget --no-check-certificate $COUNTRY_URL"
wget --no-check-certificate $COUNTRY_URL
#
# See if the country info data file now exists
#
if [ ! -e "countryInfo.txt" ]; then
echo "--> Country info data file did not download"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed
GeoIP update failed.
Unable to download country info data file
$COUNTRY_URL
EOM
exit
fi
#
# Build an old format data file from the new format data files
#
echo "--> cat ./GeoLite2/GeoLite2-Country-Blocks-IPv{4,6}.csv | ./convert_GeoLite2.pl ./countryInfo.txt > /usr/share/xt_geoip/GeoIP-legacy.csv"
cat ./GeoLite2/GeoLite2-Country-Blocks-IPv{4,6}.csv | ./convert_GeoLite2.pl ./countryInfo.txt > /usr/share/xt_geoip/GeoIP-legacy.csv
#
# Delete the downloaded data files
#
echo "--> rm -r GeoLite2"
rm -r GeoLite2
echo "--> rm countryInfo.txt"
rm -f countryInfo.txt
#
# Preserve the old BE and LE directories just in case
#
echo "--> rm -r -f LastBE LastLE"
rm -r -f LastBE LastLE
echo "--> mv BE LastBE"
mv BE LastBE
echo "--> mv LE LastLE"
mv LE LastLE
#
# Convert the generated database to the xtables GeoIP format
#
echo "--> /usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip ./GeoIP-legacy.csv"
/usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip ./GeoIP-legacy.csv
#
# Delete the remaining data files
#
echo "--> rm countryInfo.txt"
rm -f countryInfo.txt
echo "--> rm GeoIP-legacy.csv"
rm GeoIP-legacy.csv
#
# Notify root that the update succeeded
#
echo "--> Send notification email to root"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: Weekly update of xtables GeoIP completed
Weekly update of xtables GeoIP database successful.
EOM
echo "xtables GeoIP database update completed"
You can also download the source from the xtables-addons project (either directly or from the sid version of the xtables-addons-common package) and grab updated versions of the scripts.
https://sourceforge.net/projects/xtables-addons/files/Xtables-addons/
See the following askubuntu answer:
https://askubuntu.com/questions/1117669/xtables-addons-issues-with-maxmind-geolite2
Have a look at GeoLite2xtables:
https://github.com/mschmitt/GeoLite2xtables
You can download a zip (or git clone).
It has an example workflow (shell commands) for the legacy GeoLite CSV (which is probably what you had, and which stopped working in early January 2019) and for the GeoLite2 CSV (which you can use instead).

How to execute the ffmpeg thumbnail extraction command using sub-process in django?

We are using the following command to extract thumbnail images from a video:
ffmpeg -i low.mkv -vf thumbnail=10,setpts=N/TB -r 1 -vframes 10 inputframes%03d.png
This command works absolutely fine in the terminal, but it gives an error when we run it via subprocess in Django.
Our aim is to generate 10 thumbnails from a video of any length.
Here is the code:
vaild_fps = "'thumbnail=10,setpts=N/TB -r 1 -vframes 10'"
subprocess.call([settings.FFMPEG_PATH,
                 '-i',
                 input_file_path,
                 '-vf',
                 vaild_fps,
                 thumbnail_output_file_path,
                 ])
Error No such filter: 'thumbnail=10,setpts=N/TB -r 1 -vframes 10'
Error opening filters!
Fortunately, I was able to crack it. Here is the solution:
subprocess.call([settings.FFMPEG_PATH,
                 '-i',
                 input_file_path,
                 '-vf',
                 'thumbnail=10,setpts=N/TB',
                 '-r',
                 '1',
                 '-vframes',
                 '10',
                 thumbnail_output_file_path,
                 ])
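For what it's worth, the original call failed because the whole quoted string 'thumbnail=10,setpts=N/TB -r 1 -vframes 10' was passed to ffmpeg as a single -vf value, so ffmpeg looked for a filter with that literal name; every command-line token has to be its own list element, as in the fix above. If you would rather keep the command as one readable string, a small sketch (the paths are placeholders, not from the original post) is to let shlex tokenize it:
import shlex
import subprocess

# Placeholder command; substitute settings.FFMPEG_PATH and your real input/output paths
cmd = "ffmpeg -i low.mkv -vf thumbnail=10,setpts=N/TB -r 1 -vframes 10 inputframes%03d.png"
subprocess.call(shlex.split(cmd))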