vcpkg: qtdeclarative-everywhere-src-5.15.0.tar.xz "Transferred partial file" - c++

I am trying to install Qt5 using vcpkg on Windows 10. Unfortunately, when executing ./vcpkg.exe install qt5:x64-windows, I get a long list of errors, the most useful information being:
-- Downloading http://download.qt.io/official_releases/qt/5.15/5.15.0/submodules/qtdeclarative-everywhere-src-5.15.0.tar.xz... Failed. Status: 18;"Transferred a partial file"
Failed to download file.
If you use a proxy, please set the HTTPS_PROXY and HTTP_PROXY environment
variables to "https://user:password@your-proxy-ip-address:port/".
Otherwise, please submit an issue at https://github.com/Microsoft/vcpkg/issues
Error: Building package qt5-declarative:x64-windows failed with: BUILD_FAILED
Please ensure you're using the latest portfiles with `.\vcpkg update`, then
submit an issue at https://github.com/Microsoft/vcpkg/issues including:
Package: qt5-declarative:x64-windows
Vcpkg version: 2020.06.15-nohash
As I am using the current version and successfully downloaded and installed opencv as well as eigen3, I don't think the proxy is the issue.
I was able to download the file itself and hoped I could manually place it in the required location (is this possible?) or use a different mirror. I would be glad if someone could give me guidance, as I am new to vcpkg.
Thanks in advance
Edit: As suggested by @drescherjm, I pasted the file into the vcpkg\downloads folder. Now I am looking at the "File does not have expected hash" error. How do I solve that issue?
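One way to check, assuming the usual vcpkg layout: compare the SHA512 of the file you placed in vcpkg\downloads with the hash the port expects (search the ports\ tree for the file name; the expected SHA512 is recorded in the port's files). A mismatch usually means the manually downloaded archive is itself incomplete or comes from a different release.
certutil -hashfile vcpkg\downloads\qtdeclarative-everywhere-src-5.15.0.tar.xz SHA512
or, in PowerShell:
Get-FileHash .\vcpkg\downloads\qtdeclarative-everywhere-src-5.15.0.tar.xz -Algorithm SHA512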

Related

PowerBI Visuals Tools - error after pbiviz start command

I am trying to start my custom visual (as I usually do), but after I updated powerbi-custom-visual from beta version 3.0.10 to 3.0.11, I got the following error:
error ENOENT: no such file or directory, open '/Users/mar/CustomVisuals/rangechart/.tmp/precompile/visualPlugin.ts'
(node:1454) UnhandledPromiseRejectionWarning: Error: Failed to generate visualPlugin.ts
at generateVisualPlugin.then.catch.ex (/usr/local/lib/node_modules/powerbi-visuals-tools/node_modules/powerbi-visuals-webpack-plugin/index.js:168:12)
at <anonymous>
Does anyone know why that is? I went back to the previous beta version of powerbi-custom-visual, but it did not help. With version 2.3.0 everything works fine.
I ran into something similar after I deleted the .tmp folder in my project to clean up from an old build. I found that I had to manually (re)create the .tmp/precompile directories inside my project folder. Not sure why the tool couldn't handle creating them itself.
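If it helps, recreating them from the project root is just the following (assuming a macOS/Linux shell, matching the path in the error above):
mkdir -p .tmp/precompile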

How to compile an Ignite application with CMake?

I compiled my Ignite application successfully, but the binary doesn't work:
/tmp/tmp.Nw0IPD6ru3/cmake-build-debug-local-container/planet_engine: error while loading shared libraries: libjvm.so: cannot open shared object file: No such file or directory
How can I make it work?
I also compiled the C++ examples successfully, such as ignite-compute-example, but when I execute them I get an error message:
An error occurred: JVM library is not found (did you set JAVA_HOME environment variable?)
I am using the nightly release version 2.8.0.20190213 because I couldn't build version 2.7 in my environment.
My environment variables are listed below.
IGNITE_HOME=
TERM=xterm-256color
SHELL=/bin/bash
LIBRARY_PATH=/root/jre1.8.0_201/lib/amd64/server:/root/jre1.8.0_201/lib/amd64/
LC_NUMERIC=ko_KR.UTF-8
SSH_TTY=/dev/pts/0
JRE_HOME=/root/jre1.8.0_201
USER=root
LS_COLORS=rs=0:d...
LD_LIBRARY_PATH=/root/jre1.8.0_201/lib/amd64/server:/root/jre1.8.0_201/lib/amd64/
CLASS_PATH=/root/jdk-11.0.2/lib:
LC_TELEPHONE=ko_KR.UTF-8
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/jdk-11.0.2/bin
LC_IDENTIFICATION=ko_KR.UTF-8
JAVA_HOME=/root/jdk-11.0.2
LANG=en_US.UTF-8
LC_MEASUREMENT=ko_KR.UTF-8
JDK_HOME=/root/jdk-11.0.2/lib
SHLVL=1
HOME=/root
LOGNAME=root
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
LC_TIME=ko_KR.UTF-8
LC_NAME=ko_KR.UTF-8
_=/usr/bin/env
Thank you for reading. :)
I got it.
I am working in a Docker container environment, and therefore I use remote build and debug over SSH with gdb.
I finally found out why it couldn't find libjvm.so and why it couldn't read environment variables such as JAVA_HOME: it is because the program is running under gdb.
I confirmed that it works without gdb.
I will look for a solution, and if I find one, I will update this answer.
[Solved]
Here is how I solved it.
I was using Oracle JDK 11 installed from an archive, but the Ignite C++ client expects something different from the latest JDK releases.
Ignite needs a directory structure like this:
JAVA_HOME/   (the JDK install directory)
    jre/
        lib/
    lib/
    ...
I solved it with apt install openjdk-8-jdk; openjdk-8-jdk has the directory structure Ignite needs.
I added JAVA_HOME and IGNITE_HOME to /etc/environment.
It finally works.
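The entries in /etc/environment look roughly like this (the paths are only an example, assuming the usual Debian/Ubuntu openjdk-8 location and an Ignite install under /opt/ignite; adjust to your setup):
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
IGNITE_HOME=/opt/ignite
LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server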
But I ran into another problem, haha. I am so sad.
This one is also a GDB problem...
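I have not dug into it yet, but the usual ways to hand environment variables to a program run under gdb are roughly these (paths reuse the assumed openjdk-8 locations above):
# when gdb launches the program itself, set the variables before "run":
(gdb) set environment JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
(gdb) set environment LD_LIBRARY_PATH /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server
(gdb) run
# when the program runs under gdbserver in the container, export the variables in the
# shell that starts gdbserver instead:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server
gdbserver :1234 ./planet_engine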

Pabot - Unable to run parallel robotframework tests

So, I'm working on a robotframework test project, and the goal is to run several test suites in parallel. For this purpose, pabot was chosen as the solution. I am trying to implement it, but with little success.
My issue is: after installing Pabot (which, I might say, I did by cloning the project and running "setup.py install", instead of using pip, since the corporate proxy I'm behind has proven an obstacle I can't overcome), I created a new directory in the project tree, moved some suites there, and ran:
pabot --processes 2 --outputdir pabot_results Login*.robot
Doing so results in the following error message:
2018-10-10 10:27:30.449000 [PID:9676] [0] EXECUTING Suites.LoginAdmin
2018-10-10 10:27:30.449000 PID:400 EXECUTING Suites.LoginUser
2018-10-10 10:27:30.777000 PID:400 FAILED Suites.LoginUser
2018-10-10 10:27:30.777000 [PID:9676] [0] FAILED Suites.LoginAdmin
WARN: No output files in "pabot_results\pabot_results"
Output:
[ ERROR ] Reading XML source '' failed: invalid mode ('rb') or filename
Try --help for usage information.
Elapsed time: 0 minutes 0.578 seconds
Upon inspecting the stderr file that was generated, I have this message:
Traceback (most recent call last):
File "C:\Python27\Lib\site-packages\robotframework-3.1a2.dev1-py2.7.egg\robot\running\runner.py", line 22, in
from .context import EXECUTION_CONTEXTS
ValueError: Attempted relative import in non-package
Apparently, this has to do with something from the runner.py script, which, if I'm not mistaken, came with the installation of robotframework. Since manually modifying that script does not seem to me the optimal solution, my question is, what am I missing here? Did I forget to do anything while setting this up? Or is this an issue of compatibility between versions?
This project is using Maven as the tool to manage dependencies. The version I am running is 3.5.4. I am using a Windows 10, 64bit system; I have Python 2.7.14, and Robot Framework 3.1a2.dev1. The Pabot version is 0.44. Obviously, I added C:\Python27 and C:\Python27\Scripts to the PATH environment variable.
Edit: I am also using robotframework-maven-plugin version 1.4.0.8, if that happens to be relevant.
Edit 2: added the error messages in text format.
I believe I've come across a similar issue when setting up parallel execution on my machine. Firstly, I would confirm that pabot is installed using pip show robotframework-pabot.
Then you should define the directory your results will go to using -d.
I also changed the output name with -o to Output.xml to make it easy to identify.
This is a copy of the command I use; it runs optimally with 8 processes:
pabot --processes 8 -d results -o Output.xml Tests
It seems that you stumbled on a bug in the prerelease version of Robot Framework (3.1a2.dev1).
Please install a released version of Robot Framework, for example 3.0.4.
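If pip is usable in your environment, the downgrade is typically just:
pip install robotframework==3.0.4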
Just in case anyone happens to stumble upon this issue in the future:
Since I can't use pip, and I tried a good deal of workarounds that eventually made things more unstable, I ended up saving my project and removing everything Python-related from my system, so as to allow me to install everything from scratch. In a Windows 10, 64bit system, I used:
Python 2.7.14
wxPython 2.8.12.1, win64, unicode, for py27
setuptools 40.2.0 (to allow me to use the easy_install command)
Robot Framework 3.0.4
robotremoteserver 1.1
Selenium2Library 3.0.0
and Pabot version 0.45.
I might add that, when installing the Selenium2Library the way I described above, it eventually tries to download some things from the pip repositories - which, if you have a proxy, will cause you trouble. I solved this problem by browsing https://pypi.org/simple/selenium/, manually downloading the 2.53.6 .tar.gz file, then extracting it and running setup.py install on the command line.
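The manual install amounts to something like this (commands are approximate; on Windows you can extract the .tar.gz with 7-Zip instead of tar):
tar -xzf selenium-2.53.6.tar.gz
cd selenium-2.53.6
python setup.py install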
PS: Ideally, though, anyone should be able to use proxy settings from the command line (--proxy http://user:password@server:port) to get pip and then use it; however, for some reason, probably related to network security configurations that I didn't want to lose time with, this didn't work in my case.

Unable to install Anaconda environment containing anaconda 4.0.0 np110py27_0

In Anaconda I am trying to create an environment using an environment.yml file which begins with the lines:
name: mytest
dependencies:
- anaconda=4.0.0=np110py27_0
However when trying to create the environment, I get the error:
Fetching package metadata .........
Solving package specifications: ....
Error: The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
I encountered no problems when I did this seven days ago, but when I tried this yesterday I got the error.
I am running on Windows 7 64-bit as administrator, Anaconda 2.2.0 (Python 2.7 version). The "conda list" output includes conda 4.1.11 and conda-env 2.5.2.
To try to isolate the error, I installed Miniconda2 on a different 64-bit Windows 7 computer (as administrator) that had never had any Anaconda/Miniconda installed before. This is the most recent 64-bit Python 2.7 series (Miniconda2-4.1.11-Windows-x86_64.exe).
But trying to install anaconda=4.0.0=np110py27_0, either to a new environment or to the root environment, both produce the same error I received before:
C:\>conda install anaconda=4.0.0=np110py27_0
Fetching package metadata .........
.Solving package specifications: ....
The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
C:\>conda create --name test400 anaconda=4.0.0=np110py27_0
Fetching package metadata .........
.Solving package specifications: ....
The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
How can I determine what is causing the conflict, and how could I resolve it, given that conda is not naming a second package in its error message? I have seen responses to other "specifications in conflict" questions in which the answer is often "Install the problematic package to a separate python environment", but in this case the new environment could not be created with the package. Starting from a clean Miniconda install did not work either. I suspect something has changed in the Anaconda repository (which would be consistent with the original environment.yml working in the past but not now), but how would I determine if this is the underlying issue?
Thanks.
The underlying issue was a temporary error in the https://repo.continuum.io/pkgs/free/win-64/repodata.json file, which has since been fixed.
For reference for anyone investigating Anaconda dependency conflicts, here are the details of the investigation, and the workaround for this case:
The cause:
The repodata.json file (linked above) is essentially a 'master list' of the dependencies of the various libraries in https://repo.continuum.io/pkgs/free/win-64/. The "conda" command uses this repodata.json file.
While the problem was occurring, the repodata.json file incorrectly listed "_nb_ext_conf" as a dependency for each version of ipywidgets. (The /info/index.json file inside "ipywidgets-4.1.1-py27_0.tar.bz2" did not list "_nb_ext_conf" as a dependency, however I think newer versions of ipywidgets require it.)
The "_nb_ext_conf-0.2.0-py27_0.tar.bz2" and "_nb_ext_conf-0.3.0-py27_0.tar.bz2" files list "notebook >=4.2.0" as a dependency in their info/index.json files.
The info/index.json file in anaconda-4.0.0-np110py27_0.tar.bz2 (which is used when you specify "anaconda=4.0.0=np110py27_0" in an environment.yml) lists "ipywidgets 4.1.1 py27_0" as a dependency.
Due to the temporary problem in repodata.json, this "ipywidgets 4.1.1 py27_0" caused conda to think "_nb_ext_conf" needed to be installed, thus causing conda to think "notebook >=4.2.0" also needed to be installed.
But the info/index.json file in anaconda-4.0.0-np110py27_0.tar.bz2 also specifies that the specific version "notebook 4.1.0 py27_2" must be installed.
The conflicting requirements for "notebook" versions (4.1.0 and >=4.2.0) caused the "specifications were found to be in conflict" error.
The workaround:
First, remove the line "- anaconda=4.0.0=np110py27_0" from the environment.yml file.
Next, replace that line in environment.yml with every library listed in the "depends" section of the info/index.json file from anaconda-4.0.0-np110py27_0.tar.bz2. (Remove the quotation marks, replace the spaces with equals signs, etc. to convert the .json syntax to the environment.yml syntax; see the example below.)
Finally, remove the "- notebook=4.1.0=py27_2" line from this list.
This new environment.yml file will now list every library which would have been installed by "anaconda=4.0.0=np110py27_0", with the exception of "notebook", but "notebook" will get installed anyway due to the "notebook >=4.2.0" requirement in "_nb_ext_conf" due to "ipywidgets", and/or the "notebook" requirement in "ipywidgets" itself.
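For example, the "depends" entry "ipywidgets 4.1.1 py27_0" from index.json becomes the following environment.yml line (illustrative fragment; repeat for every entry except notebook):
name: mytest
dependencies:
  - ipywidgets=4.1.1=py27_0
  # ...one converted line per remaining "depends" entry, with notebook omitted...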
Investigative tools:
The command "conda info anaconda=4.0.0=np110py27_0" gives the list of libraries required by the specified package, according to repodata.json. I put this list of libraries into a temporary_environment.yml file. Attempting to create an environment from that temporary_environment.yml file caused conda to specify that "notebook" was involved in the conflict, which gave the hint to try omitting "notebook".
Running "conda info" lists all the libraries currently installed in the active environment. The output for the environment created by temporary_environment.yml was compared to the output from an environment from a computer where "anaconda=4.0.0=np110py27_0" had previously installed successfully. This highlighted "_nb_ext_conf" as one difference.
I created a batch file which ran "conda info" for every library listed in anaconda=4.0.0=np110py27_0, and I looked for instances of "notebook" and "_nb_ext_conf" in the output. This pointed to "ipywidgets" as a suspect.
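A sketch of such a batch file, assuming a deps.txt that lists one package spec per line (the file name and layout are my own convention):
@echo off
rem run "conda info" for every spec in deps.txt and collect the output for searching
for /f "usebackq delims=" %%p in ("deps.txt") do conda info %%p >> conda_info_all.txt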

koji build : 'no package found for libcman.so.3'

I'm using koji to build a package. The error says:
Error: Package: pacemaker-cluster-libs-1.1.10-14.el6.x86_64 (build)
Requires: libcman.so.3()(64bit)
What exactly does it mean?
'libcman.so' is in the package 'cluster'. So I built clusterlib and added it to my build, but that didn't fix the problem: after I put 'cluster' into 'BuildRequires', another error, 'no package found for cluster', came out.
I think I'm not on the right track.
I didn't fix it, but I found a workaround to avoid it. It's a '.so', so I don't think the Python module I'm building really needs it here. I commented it out of the spec file and just make sure these packages are installed on the server before I install the new Python module.
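Roughly, the change in the spec file looked like this (illustrative only; the exact Requires line in your spec may differ):
# commented out as a workaround; the library is provided by the cluster packages installed on the server
#Requires: libcman.so.3()(64bit)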