Failed to connect to center.conan.io - c++

I am trying to use this GitHub template to set up a project. However, when it tries to resolve dependencies, I get the following error message:
catch2/2.13.7: Not found in local cache, looking in remotes...
ERROR: Failed requirement 'catch2/2.13.7' from 'conanfile.py (CppStarterProject/0.1)'
ERROR: HTTPSConnectionPool(host='center.conan.io', port=443): Max retries exceeded with url: /v1/ping (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
Unable to connect to cci=https://center.conan.io
1. Make sure the remote is reachable or,
2. Disable it by using conan remote disable,
Then try again.
CMake Error at build/conan.cmake:631 (message):
Conan install failed='1'
Call Stack (most recent call first):
cmake/Conan.cmake:47 (conan_cmake_install)
CMakeLists.txt:66 (run_conan)
When I ping center.conan.io it seems to be online, and https://downforeveryoneorjustme.com/center.conan.io?proto=https also says that it is up.
What am I missing?

Conan sometimes seems not to update its certificate authorities (probably if you've manually edited them). Make sure you are using the latest Conan version, then delete cacert.pem from the .conan directory in your home folder; re-running Conan should generate an updated cacert.pem with the latest authorities.
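A minimal sketch, assuming the Conan 1.x layout used by that template, where the cache lives in ~/.conan and the install is driven by conanfile.py:
# remove the stale CA bundle so Conan regenerates it on the next run
rm ~/.conan/cacert.pem
# any command that contacts the remote recreates cacert.pem, e.g. re-running the install
conan install . --build=missing
In the starter template the Conan install is invoked from CMake, so simply re-running the CMake configure step after deleting the file has the same effect.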

Related

How to fix 'Installing pods' build error when using EAS Build with Expo?

I'm trying to build an iOS app using Expo's EAS build service, but the build fails during the pod installation step shown in the Expo build details. This is the error:
Unable to find a specification for `UMTaskManagerInterface` depended upon by `EXLocation`
I tried installing pods with npm i pod-install but I still get the error. Is this because I build on Windows, or what should I do to fix this error? I also searched for the error on GitHub, and the suggestion there was to add a pod path in ios/Podfile, but I can't find that file in my Expo project. Where is the ios/Podfile located in an Expo project?
This is the full error:
Installing pods
Using Expo modules
Auto-linking React Native modules for target `MMS`: RNCAsyncStorage, RNCCheckbox, RNDateTimePicker, RNGestureHandler, RNPermissions, RNReanimated, RNScreens, react-native-safe-area-context, and react-native-viewpager
Analyzing dependencies
Fetching podspec for `DoubleConversion` from `../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec`
Fetching podspec for `RCT-Folly` from `../node_modules/react-native/third-party-podspecs/RCT-Folly.podspec`
Fetching podspec for `glog` from `../node_modules/react-native/third-party-podspecs/glog.podspec`
Adding spec repo `trunk` with CDN `https://cdn.cocoapods.org/`
CocoaPods 1.11.2 is available.
To update use: `sudo gem install cocoapods`
For more information, see https://blog.cocoapods.org and the CHANGELOG for this version at https://github.com/CocoaPods/CocoaPods/releases/tag/1.11.2
[!] Unable to find a specification for `UMTaskManagerInterface` depended upon by `EXLocation`
You have either:
* out-of-date source repos which you can update with `pod repo update` or with `pod install --repo-update`.
* mistyped the name or version.
* not added the source repo that hosts the Podspec to your Podfile.
[stderr] [!] `<PBXResourcesBuildPhase UUID=`13B07F8E1A680F5B00A75B9A`>` attempted to initialize an object with an unknown UUID. `2EE81B3C866A4A13B6460929` for attribute: `files`. This can be the result of a merge and the unknown UUID is being discarded.
pod exited with non-zero code: 1
Edit:
I tried using expo build:ios and the archive works perfectly. Why do I get this error when using eas build -p ios?

Build C++ project with Bazel offline (without internet connection)

I am trying to build the Bazel C++ tutorial from the Bazel homepage (Getting Started) with this command, but without any connection to the internet: bazel build //main:hello-world
The Jenkins server will not have any connection to the internet, so I can't do prefetching or anything similar. Is there a way to prepare my C++ project on another computer and transfer the dependencies to the Jenkins server so it can build offline? If so, how? I just need to satisfy what the following error message asks for and get the tutorial running:
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Repository rules_cc instantiated at:
/DEFAULT.WORKSPACE.SUFFIX:267:6: in <toplevel>
C:/users/XXX/_bazel_XXX/dq2p42jq/external/bazel_tools/tools/build_defs/repo/utils.bzl:201:18: in maybe
Repository rule http_archive defined at:
C:/users/XXX/_bazel_XXX/dq2p42jq/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
WARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip failed: class java.io.IOException Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
WARNING: Download from https://github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip failed: class java.io.IOException Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
ERROR: An error occurred during the fetch of repository 'rules_cc':
Traceback (most recent call last):
File "C:/users/XXX/_bazel_XXX/dq2p42jq/external/bazel_tools/tools/build_defs/repo/http.bzl", line 111, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip, https://github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip] to C:/users/XXX/_bazel_XXX/dq2p42jq/external/rules_cc/temp1363696983472254851/b1c40e1de81913a3c40e5948f78719c28152486d.zip: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
ERROR: Error fetching repository: Traceback (most recent call last):
File "C:/users/XXX/_bazel_XXX/dq2p42jq/external/bazel_tools/tools/build_defs/repo/http.bzl", line 111, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip, https://github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip] to C:/users/XXX/_bazel_XXX/dq2p42jq/external/rules_cc/temp1363696983472254851/b1c40e1de81913a3c40e5948f78719c28152486d.zip: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
ERROR: Skipping '//main:hello-world': no such package '@rules_cc//cc': java.io.IOException: Error downloading [https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip, https://github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip] to C:/users/XXX/_bazel_XXX/dq2p42jq/external/rules_cc/temp1363696983472254851/b1c40e1de81913a3c40e5948f78719c28152486d.zip: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
WARNING: Target pattern parsing failed.
ERROR: no such package '@rules_cc//cc': java.io.IOException: Error downloading [https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip, https://github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip] to C:/users/XXX/_bazel_XXX/dq2p42jq/external/rules_cc/temp1363696983472254851/b1c40e1de81913a3c40e5948f78719c28152486d.zip: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 501 Not Implemented"
INFO: Elapsed time: 30.974s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: main
Update:
I tried prefetching and it doesn't work. What I did:
Run bazel fetch //...
Copy the prefetched data from the bazel info output_base directory on my local machine to the Jenkins server (I had to recreate one symlink to the install dir).
Run bazel build --fetch=false //main:hello-world on Jenkins without refetching. Now I get the following error:
Loading:
Loading: 0 packages loaded
WARNING: /DEFAULT.WORKSPACE:1:17: External repository 'bazel_tools' is not up-to-date and fetching is disabled. To update, run the build without the '--nofetch' command line option.
ERROR: error loading package '': Every .bzl file must have a corresponding package, but '@bazel_tools//tools/build_defs/repo:http.bzl' does not have one. Please create a BUILD file in the same or any parent directory. Note that this BUILD file does not need to do anything except exist.
INFO: Elapsed time: 0.298s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
I found one solution for building a simple C++ project offline with Bazel.
First you have to download the rules_cc and rules_java libraries, because these were the external dependencies of the simple Bazel C++ tutorial project. I found no release for rules_cc, so I downloaded the zip file from the URL mentioned in the error message on the console when I tried to run bazel build offline: https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/b1c40e1de81913a3c40e5948f78719c28152486d.zip. The SHA key is also printed in the error message when you try to use the library locally with a wrong SHA key (see below). The rules_java library can be downloaded here: https://github.com/bazelbuild/rules_java/releases. The WORKSPACE specification is also given there.
Then you have to add the following to your WORKSPACE file. Be aware that you have to point to your local copies of the archive files for rules_cc and rules_java, and for rules_cc you have to set strip_prefix to the root path of the archive (the first and only root directory in the .zip file). The rules_java archive has no root directory:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_cc",
urls = ["file:C:/tmp/b1c40e1de81913a3c40e5948f78719c28152486d.zip"],
strip_prefix = "rules_cc-b1c40e1de81913a3c40e5948f78719c28152486d",
sha256 = "d0c573b94a6ef20ef6ff20154a23d0efcb409fb0e1ff0979cec318dfe42f0cdd",
)
http_archive(
name = "rules_java",
url = "file:C:/tmp/rules_java-4.0.0.tar.gz",
sha256 = "34b41ec683e67253043ab1a3d1e8b7c61e4e8edefbcad485381328c934d072fe",
)
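To double-check the value you put into sha256, you can hash the downloaded archives yourself instead of waiting for Bazel to complain about a mismatch (a small sketch; the paths are the example locations used above):
# Windows (cmd):
certutil -hashfile C:\tmp\b1c40e1de81913a3c40e5948f78719c28152486d.zip SHA256
# Linux:
sha256sum /tmp/b1c40e1de81913a3c40e5948f78719c28152486d.zip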
You can run bazel fetch //... to fetch the data on a machine that has access to the internet. Then you can simply copy the prefetched data from the bazel info output_base directory on your local PC to the bazel info output_base directory of the Jenkins job.
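A rough sketch of that transfer on a Unix-like machine, assuming a plain tar archive is an acceptable way to move the data and both machines run the same Bazel version (on Windows, copying the external/ directory with any archiver works the same way):
# on the machine with internet access
bazel fetch //...
tar czf prefetched.tar.gz -C "$(bazel info output_base)" external
# on the Jenkins machine, unpack into its own output_base and build without fetching
tar xzf prefetched.tar.gz -C "$(bazel info output_base)"
bazel build --fetch=false //main:hello-world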

How to fix urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> Error

I'm currently running Sentry in Kubernetes with automatic certificate generation using Let's Encrypt and cert-manager. When Sentry attempts to send an error to the Sentry server, the following error is thrown:
urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> (url: https://example.host.com/)
I have verified that the correct Python packages for 2.7.15 have been installed, including certifi and urllib2 along with their dependencies.
Turning off TLS verification works, but that is a last resort; security is very important even though this is an internally hosted service.
It has been my experience that even the most up-to-date ca-certificates packages sometimes don't contain all 3 Let's Encrypt certificates. The solution(?) is to download them into the "user-controlled" certificate directory (often /usr/local/share/ca-certificates) and then re-run update-ca-certificates:
# the first one very likely is already in your chain,
# but including it here won't hurt anything
for i in isrgrootx1.pem.txt lets-encrypt-x3-cross-signed.pem.txt letsencryptauthorityx3.pem.txt
do
    curl -vko /usr/local/share/ca-certificates/`basename $i .pem.txt`.crt \
        https://letsencrypt.org/certs/$i
done
update-ca-certificates
The ideal outcome would be to do that process on every node in your cluster and then volume-mount the actual SSL directory into the containers, so every container benefits from the latest certificates. However, I would guess that just doing it in the affected containers could work, too.
yum update ca-certificates.noarch did the trick for me.
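Whichever route you take, you can confirm that verification now succeeds against the system CA store without involving Sentry at all, for example with openssl (example.host.com is the placeholder host from the question, and /etc/ssl/certs is the usual Debian/Ubuntu CA path):
# "Verify return code: 0 (ok)" means the chain is now trusted
openssl s_client -connect example.host.com:443 -CApath /etc/ssl/certs </dev/null 2>/dev/null | grep "Verify return code"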

appcfg.py request_logs certificate verify failed (_ssl.c:661)

We've been using appcfg.py request_logs to download GAE logs; every once in a while it throws this error:
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
But after a few tries it works; sometimes it also works after updating gcloud with gcloud components update. We thought it might be some kind of network throttling issue and didn't give it much thought. Lately, though, we've been trying to figure out what is causing it.
The full command we use is:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append --no_cookies
The error seems to be related to the httplib2 library, but since it is part of what appcfg.py calls internally, we're not sure we should tamper with it.
Versions:
Python 2.7.13
Google Cloud SDK 196.0.0
app-engine-python 1.9.67
This has become more persistent now, and I haven't been able to download logs for a few days, no matter how many times I try.
Looking at the download logs command, I tried the same command again, but without the --no_cookies flag, to see what would happen:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append
I got the error:
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u'e~testapp').
--- end server output ---
This led me to the answer provided here https://stackoverflow.com/a/34694577/1394228 by @ninjahoahong. It worked for me and the logs were downloaded on the first try, in case someone faces the same issue.
There's also this Google Groups post, which I didn't try, but it seems to do the same thing.
I'm not sure whether removing the file ~/.appcfg_oauth2_tokens has other side effects; that remains to be seen.
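If you want to be cautious about that, you can keep a copy of the token file before removing it (just a sketch; the filename is the one mentioned above):
# back up the cached OAuth tokens in case removing them causes problems
cp ~/.appcfg_oauth2_tokens ~/.appcfg_oauth2_tokens.bak
rm ~/.appcfg_oauth2_tokens
# the next appcfg.py run should then go through a fresh OAuth authorization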
Update:
I also found out that the httplib2 located at /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2 was version 0.7.5. I upgraded it to version 0.11.3 using pip's target-directory upgrade command:
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
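To confirm that the bundled copy was actually replaced, you can import httplib2 from that directory and print its version (a quick check using the same path as above):
python2 -c "import sys; sys.path.insert(0, '/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/'); import httplib2; print(httplib2.__version__)"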

Installing php5-openssl on Amazon SUSE SE 11 SP3

I have a SUSE ES 11 SP3 Amazon EC2 instance and I need to install the php5-openssl package, but I'm running into trouble when using "zypper install". I should have applied the instance infrastructure update (more details here), but I missed the deadline and am now left with a useless instance, because I cannot install anything.
Whenever I use zypper install I get:
Refreshing service 'susecloud'.
Problem retrieving the repository index file for service 'susecloud':
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/repoindex.xml?cookies=0':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
Check if the URI is valid and accessible.
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'php' because of the above error.
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP3-Pool/sle-11-x86_64/repodata/repomd.xml':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
Abort, retry, ignore? [a/r/i/? shows all options] (a): a
Problem retrieving files from 'SLE11-SDK-SP3-Pool'.
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP3-Pool/sle-11-x86_64/repodata/repomd.xml':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
Please see the above error message for a hint.
Warning: Disabling repository 'SLE11-SDK-SP3-Pool' because of the above error.
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP3-Updates/sle-11-x86_64/repodata/repomd.xml':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
If I try updating the update infrastructure I get:
Adding repository 'tmp_instance_infrastructure_upgrade' [done]
Repository 'tmp_instance_infrastructure_upgrade' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: dir:///usr/share/instance_infrastructure_upgrade/repo
Repository 'tmp_instance_infrastructure_upgrade' priority has been set to 1.
Retrieving repository 'tmp_instance_infrastructure_upgrade' metadata [done]
Building repository 'tmp_instance_infrastructure_upgrade' cache [done]
Specified repositories have been refreshed.
Refreshing service 'susecloud'.
Problem retrieving the repository index file for service 'susecloud':
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/repoindex.xml?cookies=0':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
Check if the URI is valid and accessible.
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'php' because of the above error.
Download (curl) error for 'http://sa-east-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP3-Pool/sle-11-x86_64/repodata/repomd.xml':
Error code: Connection failed
Error message: Couldn't resolve host 'sa-east-1-ec2-update.susecloud.net'
...
--> Updating packages failed
I tried to install each php5-openssl dependency by downloading each RPM and installing it with rpm -i, but every time I resolve one dependency another one appears. I also tried the SUSE forum (post here), but with no success so far.
So my questions are:
1. Is there some way to fix the zypper repositories manually, even if only to install the php5-openssl package?
2. Is there some way to use RPM to manage the dependencies for php5-openssl?
3. Is there another alternative for installing php5-openssl on my SUSE instance?
In short:
1. You cannot fix these repositories, since they don't exist anymore (try browsing to http://sa-east-1-ec2-update.susecloud.net/).
2. rpm cannot look for dependencies, because rpm does not use repositories (that's where zypper comes in). You can look for RPMs on the internet and then install them manually with rpm, but rpm won't locate the missing RPMs for you.
3. Yes, as said in point 2, you can look for the RPMs, download them, and install them manually, or you can look for repositories that contain the RPMs you want and add one of those. Be careful with that, though, since such repositories probably aren't made for Amazon EC2. If you do this, try to find a repository as close as possible to SUSE 11 on EC2 (see the sketch below).
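A rough sketch of both approaches; the repository URL is a placeholder and the RPM file names are only illustrative, so substitute whatever you actually downloaded:
# option A: install php5-openssl together with every downloaded dependency in one rpm
# transaction, so rpm resolves them against each other instead of failing one at a time
rpm -ivh php5-openssl-*.rpm libopenssl*.rpm other-downloaded-deps*.rpm
# option B: drop the dead susecloud service and add a replacement repository, then install
zypper removeservice susecloud
zypper addrepo http://example.com/replacement/sles11-sp3/ replacement-repo
zypper refresh
zypper install php5-openssl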