Setting up OpenCensus to work with Stackdriver - C++

I'm trying to set up OpenCensus for our project, but I'm running into Bazel issues:
error loading package '@com_google_googleapis//google/devtools/cloudtrace/v2': Unable to find package for @com_google_googleapis_imports//:imports.bzl: The repository '@com_google_googleapis_imports' could not be resolved. and referenced by '@io_opencensus_cpp//opencensus/exporters/trace/stackdriver:stackdriver_exporter'
This happens when trying to use the version at HEAD. Does anyone know how to fix this? The googleapis repository indeed does not seem to contain any file named imports.bzl.

For anyone who runs into this: the problem was that I was missing the googleapis repo. This is the final WORKSPACE setup I ended up with.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# googleapis
http_archive(
    name = "com_google_googleapis",
    sha256 = "0744d1a1834ab350126b12ebe2b4bb1c8feb5883bd1ba0a6e876cb741d569994",
    strip_prefix = "googleapis-bcc476396e799806d3355e87246c6becf6250a70",
    urls = ["https://github.com/googleapis/googleapis/archive/bcc476396e799806d3355e87246c6becf6250a70.tar.gz"],
)

load("@com_google_googleapis//:repository_rules.bzl", "switched_rules_by_language")

switched_rules_by_language(
    name = "com_google_googleapis_imports",
    cc = True,
    grpc = True,
)
# opencensus
http_archive(
    name = "io_opencensus_cpp",
    sha256 = "193ffb4e13bd7886757fd22b61b7f7a400634412ad8e7e1071e73f57bedd7fc6",
    strip_prefix = "opencensus-cpp-04ed0211931f12b03c1a76b3907248ca4db7bc90",
    urls = ["https://github.com/census-instrumentation/opencensus-cpp/archive/04ed0211931f12b03c1a76b3907248ca4db7bc90.tar.gz"],
)

load("@io_opencensus_cpp//bazel:deps.bzl", "opencensus_cpp_deps")

opencensus_cpp_deps()
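With those repositories declared, a BUILD target can depend on the Stackdriver exporter that the original error referenced. A minimal sketch (the binary name and source file here are hypothetical; the exporter label is the one from the error message above):

cc_binary(
    name = "tracing_example",
    srcs = ["tracing_example.cc"],
    deps = [
        "@io_opencensus_cpp//opencensus/exporters/trace/stackdriver:stackdriver_exporter",
    ],
)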

Related

Bazel not recursively pulling the dependencies of the external dependencies of a C++ project

I am trying to use yggdrasil-decision-forests (ydf) as an external dependency of a C++ project. According to ydf's own documentation, one should include the following in the WORKSPACE file:
http_archive(
    name = "ydf",
    strip_prefix = "yggdrasil_decision_forests-master",
    urls = ["https://github.com/google/yggdrasil_decision_forests/archive/master.zip"],
)

load("@ydf//yggdrasil_decision_forests:library.bzl", ydf_load_deps = "load_dependencies")

ydf_load_deps(repo_name = "@ydf")
And the following in the BUILD file:
cc_binary(
    name = "main",
    srcs = ["main.cc"],
    deps = [
        "@ydf//yggdrasil_decision_forests/model/learner:learner_library",
        "@com_google_absl//absl/status",
    ],
)
However, this does not seem to work and returns the following error:
ERROR: some_path/WORKSPACE:9:1: name 'http_archive' is not defined
ERROR: error loading package '': Encountered error while reading extension file 'yggdrasil_decision_forests/library.bzl': no such package '@ydf//yggdrasil_decision_forests': error loading package 'external': Could not load //external package
Since http_archive seems to be the problem, I managed to get further along by using git_repository instead in the WORKSPACE file, like so:
git_repository(
    name = "ydf",
    remote = "https://github.com/google/yggdrasil-decision-forests.git",
    branch = "0.1.3",
)

load("@ydf//yggdrasil_decision_forests:library.bzl", ydf_load_deps = "load_dependencies")

ydf_load_deps(repo_name = "@ydf")
And slightly changing the BUILD file like so, since the functions I intend to use are under the model:all_models target:
cc_library(
    name = "models",
    srcs = ["models.cpp"],
    hdrs = ["models.h"],
    deps = [
        "@ydf//yggdrasil_decision_forests/model:all_models",
    ],
)
However, when I run bazel build :models with this configuration, I get the following error:
ERROR: some_path/BUILD:1:11: error loading package '@ydf//yggdrasil_decision_forests/model': in .cache/external/ydf/yggdrasil_decision_forests/utils/compile.bzl: in /some_path/.cache/external/com_google_protobuf/protobuf.bzl: Unable to find package for @rules_python//python:defs.bzl: The repository '@rules_python' could not be resolved. and referenced by '//:models'
Thus, from what I gathered, it seems that when I build my project, Bazel is not recursively pulling the dependencies of the package I am trying to use. This seems even more likely, since if I clone ydf itself and build the model:all_models target, all goes well. How can I force Bazel to recursively pull the dependencies of the external dependencies that I am trying to use?
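A WORKSPACE only fetches repositories that are declared in it; Bazel does not recursively pull the external dependencies of external dependencies. The usual fix is to declare the missing repository yourself (here @rules_python, per the error) before loading ydf's deps. A minimal sketch, assuming rules_python is the only missing repository; the release version shown is only an example and the sha256 should be pinned to the archive you actually fetch:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Declare the transitive dependency ourselves so protobuf.bzl can load it.
http_archive(
    name = "rules_python",
    # sha256 = "...",  # pin the checksum of the release you pick
    urls = ["https://github.com/bazelbuild/rules_python/releases/download/0.1.0/rules_python-0.1.0.tar.gz"],
)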

How can I get reports from Google Cloud Storage using Google's API

I have to create a program that gets information on a daily basis about installations of a group of apps on the App Store and the Play Store.
For the Play Store, using Google Cloud Storage, I followed the instructions on this page, using the client library, a service account, and the Python code example:
https://support.google.com/googleplay/android-developer/answer/6135870?hl=en&ref_topic=7071935
I slightly changed the given code to make it work, since the documentation does not look up to date. I got it to connect to the API, and it seems to connect correctly.
My problem is that I don't understand what object I get back and how to use it. It's not a report; it just looks like file properties in a dict.
This is my code (private data "hidden"):
import json
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

client_email = '************.iam.gserviceaccount.com'
json_file = 'PATH/TO/MY/JSON/FILE'
cloud_storage_bucket = 'pubsite_prod_rev_**********'
report_to_download = 'stats/installs/installs_****************_202005_app_version.csv'

private_key = json.loads(open(json_file).read())['private_key']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    json_file, scopes='https://www.googleapis.com/auth/devstorage.read_only')
storage = build('storage', 'v1', http=credentials.authorize(Http()))
supposed_to_be_report = storage.objects().get(
    bucket=cloud_storage_bucket, object=report_to_download).execute()
When I print supposed_to_be_report (which is a dictionary), I only get what I understand to be metadata about the report, like this:
{'kind': 'storage#object',
 'id': 'pubsite_prod_rev_***********/stats/installs/installs_****************_202005_app_version.csv/1591077412052716',
 'selfLink': 'https://www.googleapis.com/storage/v1/b/pubsite_prod_rev_***********/o/stats%2Finstalls%2Finstalls_*************_202005_app_version.csv',
 'mediaLink': 'https://storage.googleapis.com/download/storage/v1/b/pubsite_prod_rev_***********/o/stats%2Finstalls%2Finstalls_****************_202005_app_version.csv?generation=1591077412052716&alt=media',
 'name': 'stats/installs/installs_***********_202005_app_version.csv',
 'bucket': 'pubsite_prod_rev_***********',
 'generation': '1591077412052716',
 'metageneration': '1',
 'contentType': 'text/csv; charset=utf-16le',
 'storageClass': 'STANDARD',
 'size': '378',
 'md5Hash': '*****==',
 'contentEncoding': 'gzip', ...}
I am not sure I'm using it correctly. Could you please explain where I am going wrong and/or how to get the installs reports correctly?
Thanks.
I can see that you are using the googleapiclient.discovery client. This is not an issue in itself, but the recommended way to access Google Cloud APIs programmatically is by using the Cloud client libraries.
Second, you are just retrieving the object's metadata. You need to download the object to get access to the file contents. This is a sample using the client library:
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    # bucket_name = "your-bucket-name"
    # source_blob_name = "storage-object-name"
    # destination_file_name = "local/path/to/file"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print(
        "Blob {} downloaded to {}.".format(
            source_blob_name, destination_file_name
        )
    )
Sample taken from official docs.
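For the bucket and report from the question, the call would look something like this (the redacted names are kept as-is; the local destination filename is just an example):

download_blob(
    "pubsite_prod_rev_**********",
    "stats/installs/installs_****************_202005_app_version.csv",
    "installs_202005_app_version.csv",
)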

How to download a file in Bazel from a BUILD file?

Is there a way to download a file in Bazel directly from a BUILD file? I know I can probably use wget and enable networking, but I'm looking for a solution that would work with bazel fetch.
I have a bunch of files to download that are going to be consumed by just a single package. It feels wrong to use the standard approach of adding an http_file() rule to the WORKSPACE at the monorepo root: it would be decoupled from the package and it would pollute a totally unrelated file.
Create a download.bzl and load it in your WORKSPACE file
WORKSPACE:
load("//my_project/my_sub_project:download.bzl", "download_files")

download_files()
download.bzl:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

BUILD_FILE_CONTENT_some_3d_model = """
filegroup(
    name = "some_3d_model",
    srcs = [
        "BMW_315_DA2.obj",
    ],
    visibility = ["//visibility:public"],
)
"""

def download_files():
    http_archive(
        name = "some_3d_model",
        build_file_content = BUILD_FILE_CONTENT_some_3d_model,
        #sha256 = "...",
        urls = ["https://vertexwahn.de/lfs/v1/some_3d_model.zip"],
    )
BUILD (copy_file is a regular build rule, so it goes in a BUILD file rather than the WORKSPACE):
load("@bazel_skylib//rules:copy_file.bzl", "copy_file")

copy_file(
    name = "copy_resources_some_3d_model",
    src = "@some_3d_model",
    out = "my/destination/path/some_file.obj",
)
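A target in the same package can then consume the copied file like any other output; for example (the binary name and source file are hypothetical):

cc_binary(
    name = "viewer",
    srcs = ["viewer.cc"],
    # Output of the copy_file rule above, referenced by its package path.
    data = ["my/destination/path/some_file.obj"],
)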

output 'external/name/x/lib/lib.so' was not created using bazel make

I was trying to follow the example provided in the Building Makefile using bazel post to build an external package in Envoy. In the WORKSPACE file I added the following:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

new_git_repository(
    name = "name",
    remote = "remote.git",
    build_file = "//foo/bazel/external:x.BUILD",
)
And foo/bazel/external/x.BUILD has the following contents:
load("#rules_foreign_cc//tools/build_defs:make.bzl", "make")
filegroup(
name = "m_srcs",
srcs = glob(["code/**"]),
)
make(
name = "foo_bar",
make_commands = ["make lib"],
lib_source = ":m_srcs",
shared_libraries = ["lib.so"],
)
and I set the visibility in foo/bazel/BUILD with package(default_visibility = ["//visibility:public"]).
On executing bazel build -s @name//:foo_bar, I get the error that external/name/x/lib/lib.so was not created.
I checked bazel-bin/external/name/x/logs/GNUMake.log, and make completes successfully. I can see that lib.so was created in the BUILD_TMPDIR directory. I think it should have been copied to EXT_BUILD_DEPS/lib, but I am not sure why it was not. I would appreciate any tips on debugging this error.
Edit: fixed it by changing the make command to manually copy the lib to the expected folder: make_commands = ["make libs; cp lib.so $INSTALLDIR/lib/lib.so"]
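In context, the fixed rule would look something like this (a sketch based on the snippet above; "make libs" is taken verbatim from the edit):

make(
    name = "foo_bar",
    # Copy the built library into $INSTALLDIR so the rule finds it where it expects to.
    make_commands = ["make libs; cp lib.so $INSTALLDIR/lib/lib.so"],
    lib_source = ":m_srcs",
    shared_libraries = ["lib.so"],
)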

py2app - preserve directory structure

I would like to make a Mac OS X app for distributing my program.
The problem is that the current script puts "images/Diagram.png" and the other files under the "images" folder into "Resources", but not into "Resources/images" as expected.
What can I change in this part of setup.py in order to put the png files under images?
mainscript, ico, icns = "aproxim.py", "sum.ico", "sum.icns"
files = ["images/Diagram.png", "images/document.png", "images/Exit.png", "images/floppydisc.png",
         "images/folder.png", "images/Info.png", "images/settings.png"]
scripts = ["app_settings_dialog", "app_window", "approximator", "approximator2", "data_format",
           "easy_excel", "functions", "main_window", "settings_dialog",
           "util_data", "util_parameters"]
description = "Approximator is program for experimental data approximation and interpolation (20 dependencies for select)"
common_options = dict(name = "Approximator", version = "1.7", description = description)

if sys.platform == 'darwin':
    setup(
        setup_requires = ['py2app'],
        app = [mainscript],
        options = dict(py2app = dict(includes = scripts, resources = files, iconfile = icns)),
        **common_options)
It is easy; hope it is useful for other Python developers:
resources = [('images', files), ('', [ico])]
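In context, the py2app options from the question would then become something like this (a sketch; resources takes (destination, files) pairs, so the png files end up under Resources/images and the .ico lands at the Resources root):

resources = [('images', files), ('', [ico])]
if sys.platform == 'darwin':
    setup(
        setup_requires = ['py2app'],
        app = [mainscript],
        options = dict(py2app = dict(includes = scripts, resources = resources, iconfile = icns)),
        **common_options)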