Inkscape missing pixels when upscaling SVG

I'm using inkscape 0.92.3 on Ubuntu 18.04
I'm using the command line to produce a JPG with a width of 4000 pixels:
inkscape -z -f in.svg -j -w 4000 -e out.jpg
My problem is that when I zoom in on the output, I can see that some pixels are "missing" (they show up black):
Why is that and is there a way to prevent it?

These missing pixels represent the background. Black is the default color.
To change the background color, use the -b option:
inkscape -z -f in.svg -b white -w 4000 -e out.jpg
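If you later move to Inkscape 1.x, note that the command-line options were renamed; a rough equivalent (exporting to PNG here, so double-check against inkscape --help on your version) would be:
inkscape in.svg --export-type=png --export-width=4000 --export-background=white --export-filename=out.png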

How to encode an image using jbig2enc

I am trying to encode an image using the JBIG2 encoder that I installed via MacPorts:
https://ports.macports.org/port/jbig2enc/
I have also installed leptonica from MacPorts:
https://ports.macports.org/port/leptonica/
The system seems to have installed it:
% jbig2 -V --version
jbig2enc 0.28
Also, from jbig2 --help I am getting this
% jbig2 --help
Usage: jbig2 [options] <input filenames...>
Options:
-b <basename>: output file root name when using symbol coding
-d --duplicate-line-removal: use TPGD in generic region coder
-p --pdf: produce PDF ready data
-s --symbol-mode: use text region, not generic coder
-t <threshold>: set classification threshold for symbol coder (def: 0.85)
-T <bw threshold>: set 1 bpp threshold (def: 188)
-r --refine: use refinement (requires -s: lossless)
-O <outfile>: dump thresholded image as PNG
-2: upsample 2x before thresholding
-4: upsample 4x before thresholding
-S: remove images from mixed input and save separately
-j --jpeg-output: write images from mixed input as JPEG
-a --auto-thresh: use automatic thresholding in symbol encoder
--no-hash: disables use of hash function for automatic thresholding
-V --version: version info
-v: be verbose
Since the encoder's page at https://github.com/agl/jbig2enc shows how to encode images, I tried the command mentioned there:
$ jbig2 -s feyn.tif >feyn.jb2
I ran it on an image, original.jpg. This is what I am getting:
> jbig2 -s original.jpg &gt;original.jb2
[1] 43894
zsh: command not found: gt
zsh: command not found: original.jb2
sahilsharma@Sahils-Air ~ % JBIG2 compression complete. pages:1 symbols:5 log2:3
?JB2
?|?n6?Q?6?(m?զu? Y???_?&??1???<?CJ?????#Rᮛ?O?V??:?,??i4?A?????5?;ސA??-!????5Ѧ??/=n܄?*?#|J6#?J?6?N1?n??v?"E}?.~?+????ڜ?]HO_b??~?[??????S2p𩗩????fC?????X?Z?????X=?m?????
??jN?????i????S?,j6???Br?V??F???8?w?#?6? uK?V??R?s~F-?F%?j????]j???0?!GG"'?!??)2v??K???h-???1
[1] + done jbig2 -s original.jpg
According to '--help', '-s' will do the lossless encoding.
The execution shows that JBIG2 compression completed, but no .jb2 file has been created.
Please help me figure out whether the compression has actually taken place, and if so, where I can find the encoded image.
I am running this encoder to measure the compression ratio, so I just want to know the size of the encoded image.
Use >, not &gt;. The redirected output will then end up in original.jb2.
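In other words, with your original file name and a normal shell redirect, the run plus a quick size check (which is all you need for the compression ratio) would look like:
jbig2 -s original.jpg > original.jb2
ls -l original.jb2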

Raw output file got damaged

I'm working on the oneVPL samples from this GitHub repository (https://github.com/oneapi-src/oneAPI-samples) and I'm trying to build the hello-vpp sample. After running the program with the command in the readme.md file, I wanted to increase the video size to 1280x720. While playing the raw output file, I used the command below:
ffplay -video_size 1280x720 -f rawvideo out.raw
It looks like my raw output file got damaged: the video played back garbled. How do I change the width and height of the output file? Any suggestions here?
Add the scale filter. Example assuming video.raw is 640x360:
ffplay -f rawvideo -video_size 640x360 -pixel_format rgba -vf scale=1280:720 video.raw
Try the below command:
ffplay -video_size 1280x720 -pixel_format bgra -f rawvideo out.raw
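As a side note, a raw file has no header, so ffplay slices it into frames using whatever size and pixel format you pass it; if those don't match what hello-vpp actually wrote, playback looks corrupted even though the file is fine. A quick sanity check (assuming BGRA, i.e. 4 bytes per pixel, and a guessed 640x360 frame size; adjust both to your case, and note stat -c is GNU stat) is to confirm the file size divides into whole frames:
echo $(( $(stat -c %s out.raw) % (640 * 360 * 4) ))   # 0 means the file is an exact number of 640x360 BGRA frames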

Use gdb to debug MPI in multiple screen windows

If I have an MPI program that I want to debug with gdb while being able to see all of the separate processes' outputs, I can use:
mpirun -n <NP> xterm -hold -e gdb -ex run --args ./program [arg1] [arg2] [...]
which is well and good when I have a GUI to play with. But that is not always the case.
Is there a similar set up I can use with screen such that each process gets its own window? This would be useful for debugging in a remote environment since it would allow me to flip between outputs using Ctrl+a n.
I think this answer in the "How do I debug an MPI program?" thread does what you want.
EDITS:
In response to the comment, you can do it somewhat more easily, although succinct isn't exactly the term I would use:
Launch a detached screen via mpirun, running your debugger and process. I've called the session mpi, and I'm passing through my library path because it gets stripped by screen and my demo needs it (also I'm on a Mac, hence lldb and DYLD):
mpirun -np 4 screen -AdmS mpi env DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH lldb demo.out
Then launch a separate screen session, which I've called 'debug':
screen -AdmS debug
Use screen -ls to list the running sessions:
>> screen -ls
There are screens on:
19871.mpi (Detached)
19872.mpi (Detached)
19875.mpi (Detached)
19876.mpi (Detached)
20105.debug (Detached)
Now launch 4 new tabs in the debug session, attaching each to one of the mpi sessions:
screen -S debug -X screen -t tab0 screen -r 19871.mpi
screen -S debug -X screen -t tab1 screen -r 19872.mpi
screen -S debug -X screen -t tab2 screen -r 19875.mpi
screen -S debug -X screen -t tab3 screen -r 19876.mpi
Then simply attach to your debug session with screen -r debug. Now you have 4 tabs, each running a serial instance of the debugger attached to an MPI process, similarly to the xterm method you described before. It's not exactly the quickest set of commands, but at least you don't need to modify your code or chase PIDs, etc.
Another method I tried, but which doesn't seem to work:
Launch a detached screen
screen -AdmS ashell
Launch two mpi processes that start new screen tabs in the detached session, launching lldb with my demo mpi application:
mpirun -np 1 screen -S ashell -X screen -t tab1 env DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH lldb demo.out : -np 1 screen -S ashell -X screen -t tab2 env DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH lldb demo.out
Or alternatively just
mpirun -np 2 screen -S ashell -X screen env DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH lldb demo.out
Then attach to screen with
screen -r ashell
And you'll have 3 tabs, 2 of them running lldb with your program, and one with whatever your standard shell is. Unfortunately, when you try running the programs, each process thinks it's the only one in the comm world, and I'm not sure what to do about that...
How do you debug a C/C++ MPI program?
One way is to start a separate terminal and gdb session for each of the
processes:
mpirun -n <NP> xterm -hold -e gdb -ex run --args ./program [arg1] [arg2] [...]
where NP is the number of processes.
What if you don't have a GUI handy?
(See below for a handy script.)
This is based on timofiend's answer here.
Spin up the mpi program in its debugger in a number of screen sessions:
mpirun -np 4 screen -AdmS mpi gdb ./parallel_pit_fill.exe one retain ./beauford.tif 500 500
Spin up a new screen session to access the debugger:
screen -AdmS debug
Load the debugger screen sessions into the new screen session:
screen -list | #Get list of screen sessions
grep -E "[0-9]+.mpi" | #Extract the relevant ones
awk '{print NR-1,$1}' | #Generate tab #s and session ids, drop rest of the string
xargs -n 2 sh -c '
screen -S debug -X screen -t tab$0 screen -r $1
'
Jump into the new screen session:
screen -r debug
I've encapsulated the above in a handy script:
#!/bin/bash
if [ $# -lt 2 ]
then
echo "Parallel Debugger Syntax: $0 <NP> <PROGRAM> [arg1] [arg2] [...]"
exit 1
fi
the_time=`date +%s` #Use this so we can run multiple debugging sessions at once
#(assumes we are only starting one per second)
#The first argument is the number of processes. Everything else is what we want
#to run. Make a new mpi screen for each process.
mpirun -np $1 screen -AdmS ${the_time}.mpi gdb "${@:2}"
#Create a new screen for debugging from
screen -AdmS ${the_time}.debug
#The following are used for loading the debuggers into the debugging screen
firstpart="screen -S ${the_time}.debug"
secondpart=' -X screen -t tab$0 screen -r $1'
screen -list | #Get list of mpi screens
grep -E "[0-9]+.${the_time}.mpi" | #Extract the relevant ones
awk '{print NR-1,$1}' | #Generate tab #s and session ids, drop rest of the string
xargs -n 2 sh -c "$firstpart$secondpart"
screen -r ${the_time}.debug #Enter debugging screen
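Saved as, say, mpi_debug.sh (the name is arbitrary) and made executable, the script is invoked just like the longhand example above:
./mpi_debug.sh 4 ./parallel_pit_fill.exe one retain ./beauford.tif 500 500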
You can have a look at tmpi, which automates what the other answers show how to achieve, but using tmux instead of screen.
And as a bonus, it multiplexes your keyboard input to all MPI ranks!
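For example (based on tmpi's documented tmpi <nprocs> <command...> interface), the xterm one-liner from the question becomes roughly:
tmpi 4 gdb -ex run --args ./program arg1 arg2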

New versions of srep failing my compression algorithm

I am currently using freearc, precomp042 and srep 3.2
I use 4 regular compression styles, as noted below, depending on the data type, and have no problem. I recently tried srep 3.9, 3.91 and 3.92. All compress OK but fail immediately on decompression with an srep l256 error.
What can I change to allow the new versions to work, and is it possible to improve compression? I have 8 GB of RAM, and compression time is not a problem for me.
Many thanks
arc a -ep1 -ed -r -w Archive(A1).bin -mx -mc-delta -mc:lzma/lzma:192mb:normal:bt4:192:mc10000:lc8 -ld192m -mc:rep/srep:l256 -mc$default,$obj:+precomp042:c-:t-j:intense -s -x Archive\*.*
arc a -ep1 -ed -r -w Archive(A2).bin -mx -mc:lzma/lzma:max:512mb -mc:exe/exe2 -mc:rep/maxsrep -mc$default,$obj:+maxprecompj -x Archive\*.*
arc a -ep1 -ed -r -w Archive(A3).arc -msrep+lzma:a1:mfbt4:d256m:fb128:mc1000:lc8 -x Archive\*.*
arc a -ep1 -ed -r -w Archive(A4).bin -mprecomp:zl69:d0:t-jnf+srep+lzma:a1:mfbt4:d256m:fb128:mc1000:lc8 -x Archive\*.*
Here mate, try this. It's the most powerful compression without messing around with many external compressors:
-mprecomp+srep:m3f:a1:l256+lzma:a1:mfbt4:d200m:fb128:mc1000:lc8
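Dropped into the same command shape as your A1 line (the archive name here is just a placeholder following your pattern), that would look roughly like:
arc a -ep1 -ed -r -w Archive(A5).bin -mprecomp+srep:m3f:a1:l256+lzma:a1:mfbt4:d200m:fb128:mc1000:lc8 -x Archive\*.*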

Screen capture program in C or C++ and Linux [duplicate]

This question was closed as a duplicate of: Fastest method for screen capturing on Linux
I am looking for a program to capture the screen in Linux using C or C++. Can someone help by giving a skeleton structure or a program that can help me?
How to capture screen with ffmpeg:
Use the x11grab device:
ffmpeg -f x11grab -r 25 -s 1024x768 -i :0.0+100,200 output.flv
This will grab the image from the desktop, starting with the upper-left corner at (x=100, y=200), with a width and height of 1024x768.
If you need audio too, you can use alsa like this:
ffmpeg -f x11grab -r 25 -s 1024x768 -i :0.0+100,200 -f alsa -ac 2 -i pulse output.flv
So you can simply place this in capture.sh and run it from your code:
#include <cstdlib>
int main(){ std::system("./capture.sh"); } // runs the capture script through the system shell
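For completeness, capture.sh could be nothing more than a wrapper around the x11grab command above, for example with an illustrative 10-second duration:
#!/bin/sh
# Grab 1024x768 of the desktop at offset (100,200) for 10 seconds.
ffmpeg -f x11grab -r 25 -s 1024x768 -i :0.0+100,200 -t 10 output.flv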
If you have to do it without calling external utilities, you can use libffmpeg directly.