How to deal with air-gapped builds when using vcpkg - C++

I'm in the process of creating a custom registry hosted in Azure DevOps.
The plan going forward will be to host some third party libraries as well as our own libraries in this custom registry.
Each project will then be using manifests in order to declare all dependencies and their required versions.
So far everything works as expected. I've already created a port out of one of our libraries and I'm currently distributing it via our custom registry.
Now for the part I'm unsure how to handle.
At my company we do an "air-gapped" build, meaning the source code is taken to a machine on a private network with no internet connection, where the build is performed.
This is of course problematic, as the air-gapped machine will have access neither to the custom ports registry we're hosting on ADO, nor to the repos hosting the projects we're distributing via our custom registry.
I'm trying to figure out a solution to this issue.
My first thought was to tell the air-gap team to first clone the required repos to a USB stick. Then we could configure Visual Studio to use overlay ports that consume the source cloned onto the USB stick together with a custom port file, something like the sketch below. I have no idea if this would actually work.
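For reference, here's roughly what I had in mind; vcpkg's overlay mechanism can be pointed at a local folder either on the command line or per project (all paths and the folder layout here are made up, and I haven't verified any of this):

vcpkg install --overlay-ports=D:\usb\ports

or, in manifest mode, via a vcpkg-configuration.json next to the project's vcpkg.json:

{
  "overlay-ports": [ "./usb-ports" ]
}

Each overlay port folder would hold a vcpkg.json and a portfile.cmake, and the portfile could take its SOURCE_PATH from the sources cloned onto the stick rather than downloading anything.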
I'm curious what other folks who might be in a similar situation have done.
Does anyone have any ideas on how I could handle this scenario using vcpkg?

Geolocation, Geocoding in C++

I am working on a Windows application built in C++. I would like to ask for ideas, approaches, or existing libraries that implement geolocation/geocoding, since I want to limit my C++ Windows application to run only in certain regions or countries.
Any suggestions or comments would be a great help. Thanks.
It won't be possible to prevent an application from running locally in certain regions. A user can always disconnect from the internet, and then you'll have no idea where they're located.
What you could do is have some of your app logic run on a server, and make requests to that server from the local C++ app. You can then geolocate based on the IP address of the request, which is often a standard feature in cloud platforms.
If you do want to explore getting someone's location, you can look at Apple's Core Location or Microsoft's Geolocation namespace.
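On Windows, a minimal C++/WinRT sketch against the Geolocation namespace might look like this (assumes Windows 10+, the C++/WinRT headers, and linking against windowsapp.lib; the user must grant location permission, and as noted above this only tells you where the machine claims to be):

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Devices.Geolocation.h>
#include <cstdio>

using namespace winrt;
using namespace Windows::Devices::Geolocation;

int main()
{
    init_apartment();

    // Request the current position from the OS location service.
    Geolocator locator;
    Geoposition pos = locator.GetGeopositionAsync().get();

    BasicGeoposition p = pos.Coordinate().Point().Position();
    std::printf("lat=%f lon=%f\n", p.Latitude, p.Longitude);
}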

How can you find out Azure Pipelines image content?

I'm new to Azure Pipelines and struggling to put together a C++-oriented pipeline that uses CMake and properly compiles, runs tests, and builds documentation on Ubuntu, macOS, and Windows.
I managed the macOS and Ubuntu cases rather easily but am struggling with the Windows case, not knowing what's installed and what's on the system PATH for the image and container I've selected.
Not being super familiar with the Azure platform, I'm basically relying on commit-push-run-pipeline for every single little change to my YAML file, thus wasting time and resources.
I can't imagine that the only way is to blindly try out commands by commit, push, and run the pipeline.
I managed to find a basic description of the currently (hopefully) available images; following the included-software link for Windows, you end up on a comprehensive list of what's supposedly installed (I have some doubts about whether this documentation actually matches the content of the image). Calling some of the tools in that list, like cmake and choco, failed. Whether or not they're actually installed and on the system PATH, I have no idea.
Q1: Is there any way to locally test out an Azure Pipelines YAML?
Q2: Is there any way to figure out what is actually installed on a given image/container (without issuing a DIR /s from the root folder)?
Q3: Is it possible to connect to a running container (or is it a VM?) instance and directly tinker with it?
Q4: Alternatively, is it possible to run such an image locally (Docker)? Does it imply execution on a Windows machine, or is it a standalone VM image?
EDIT: Found this question, although it doesn't quite answer mine: Is there a tool to validate an Azure DevOps Pipeline locally?
Q1: Is there any way to locally test out an Azure Pipelines YAML?
The answer is yes. You could create your own private agent to execute the Azure Pipelines YAML:
Self-hosted agents
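Registering such an agent on a machine you control comes down to a couple of commands (a sketch; the organization URL, PAT, pool, and agent name are placeholders):

.\config.cmd --url https://dev.azure.com/your-org --auth pat --token <your-pat> --pool Default --agent my-agent
.\run.cmd

Once it's online, point your pipeline at it with pool: Default (or whatever pool you registered it in), and you can iterate on the YAML against a machine you can actually inspect.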
Q2: Is there any way to figure out what is actually installed on a given image/container (without issuing a DIR /s from the root folder)?
As you know, we can check the Software document for the software installed on the agent. If you want to know the install path of some software, you can check the debug log from the build task; for example, the build log of the cmake task shows where cmake is located.
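Another cheap trick is a throwaway diagnostic step that dumps the PATH and probes for the tools you care about, so a single run tells you what's actually reachable (a sketch; extend the tool list as needed):

steps:
- pwsh: |
    Write-Host $env:PATH
    foreach ($tool in 'cmake', 'choco', 'cl') {
      Get-Command $tool -ErrorAction SilentlyContinue | Format-List Name, Source
    }
  displayName: Probe installed tools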
Q3: Is it possible to connect to a running container (or is it a VM?) instance and directly tinker with it?
For the hosted agent, I'm afraid the answer is no.
Q4: Alternatively, is it possible to run such an image locally (Docker)? Does it imply execution on a Windows machine, or is it a standalone VM image?
The answer is yes, we can run a self-hosted agent in Docker. And it does imply execution on a Windows machine.
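The docs have you build the agent image yourself; once it's built, starting it looks roughly like this (a sketch; the image tag is whatever you built, AZP_URL is your organization URL, and AZP_TOKEN is a PAT):

docker run -e AZP_URL=https://dev.azure.com/your-org -e AZP_TOKEN=<your-pat> -e AZP_AGENT_NAME=docker-agent azp-agent:windows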

Connecting to Google Cloud Spanner from DBVisualizer

I've created a test Cloud Spanner instance and database and have been attempting to connect to it through DBVisualizer.
I have authenticated using the gcloud auth command, and have the driver set up within DBVisualizer.
The connection string I'm using is:
jdbc:cloudspanner://;Project=testapp;Instance=test-instance;Database=test-spanner;PvtKeyPath=/Users/userhome/.config/gcloud/application_default_credentials.json
However, when I try to connect I get the following error:
[Simba][SpannerJDBCDriver](100004) Failed to connect to Spanner: No NameResolverProviders found via ServiceLoader, including for DNS. This is probably due to a broken build. If using ProGuard, check your configuration
Is there any way to get a connection from a DB management tool such as DBVisualizer?
I found a solution, on macOS at least. Copy CloudSpannerJDBC42.jar and google-cloud-spanner-0.9.4-beta.jar to DBVisualizer's lib folder. On macOS the location is:
/Applications/DbVisualizer.app/Contents/java/app/lib
Restart DBVisualizer and then you can connect.
I don't think DBVisualizer supports Cloud Spanner right now. See their documentation: https://www.dbvis.com/features/
As the product is still pretty new publicly, we'll hopefully be seeing more 3rd party support in the coming months.
I've run into similar problems with the driver supplied by Google, so I decided to develop my own. The driver comes in both a 'thin' and a 'fat' version. The thin version is intended as a dependency to be included in Java applications you develop yourself. The fat version can be used for standalone purposes, such as this kind of connection. The fat version (and others) can be found here: https://github.com/olavloite/spanner-jdbc/releases
More information about the whole driver can be found on my GitHub page.
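For programmatic use from Java, the fat jar behaves like any other JDBC driver once it's on the classpath (a sketch; the project, instance, database, and key path are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:cloudspanner://localhost;Project=my-project;"
                + "Instance=my-instance;Database=my-db;"
                + "PvtKeyPath=/path/to/key.json";
        // Open a connection, run a trivial query, and print the result.
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}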
The driver does work with DBVisualizer. Follow these steps to set it up:
Download the driver and place it in your JRE/lib/ext directory (this is necessary because of dynamic loading of services done by the underlying Google Cloudspanner API). Make sure you place it in the lib/ext directory of the JRE you are actually using with DBVisualizer.
Open DBVisualizer and open Driver Manager. Click on Create a new Driver.
Give it the name Cloudspanner.
URL format is jdbc:cloudspanner://localhost;Project=projectId;Instance=instanceId;Database=databaseName;PvtKeyPath=key_file
The driver class is selected automatically.
Close the Driver Manager and make a new connection using the new driver.

Manage local repository

I come from the Java world and was looking for an Apache Maven alternative in the C++ world. I think I found the right project, but I have a few questions I haven't managed to find answers to.
Is it possible to manage a local repository? Let's say I work on 5 similar but different projects, and these projects mostly share the same dependencies. Will each project have its own dependencies stored inside it, or is there a system-wide (per-user) local repository where dependencies are stored?
Is it possible to "publish" only to a local folder so other projects can "see" the dependent block, or does it have to go over the biicode internet repository?
Or am I wrong about how bii works?
Looks like a nice project. Keep up the good work.
Right now, projects act as virtualenvs: each project contains and builds its own dependencies. This is intended for fast-evolving libraries. Imagine you have 5 similar projects, all depending on the same library A, version 0. While working on one of those projects you can make a modification to A and publish a new version with an API-breaking change. The other 4 projects will continue depending on version 0 and will not break. When you move to those projects you can easily update their dependencies and fix the breakages.
You can share the same library among different projects straight away with symlinks if working on Linux; this doesn't work on Windows for now.
For very stable, large projects that can be installed system-wide, it could be more convenient to depend on the installed version. CMake allows this very easily via FindXXX(). You can install the binaries system-wide with CMake install, or you can even use CMake scripts or biicode Python hooks to automatically download and install those libraries system-wide. Check, e.g., http://www.biicode.com/diego/opencvex, where OpenCV is managed with a biicode Python hook and installed system-wide.
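As an illustration of the FindXXX() route, depending on a system-wide OpenCV from CMake is just (a minimal sketch; the target name is hypothetical):

find_package(OpenCV REQUIRED)
add_executable(myapp main.cpp)
target_include_directories(myapp PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(myapp ${OpenCV_LIBS})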
At this moment there is no "local" publication, so if you want to share that way among projects, yes, you have to go over the biicode cloud servers, simply with "bii publish".
However, we are transitioning to open source. We will probably release the client code first, then a server that can be deployed in-house. Not implemented yet, but a planned feature is for this server to act as a proxy to the cloud one: you publish to the local instance but read from the cloud one. With a local installation of this server, you will be able to publish locally.

Deploy a local webservice on many machines - is it the right strategy?

I was wondering about the best way to deliver private web service instances to lots of users, so that each user could always connect to their own offline version of a service, just like running a web service from Visual Studio while debugging. I was struggling to set this up in VS2013 even with the many online tutorials, but I am not sure whether it isn't working because it was never supposed to work this way.
I have provided this in-depth explanation of my issue as I am not sure I am going about this in the right way and would appreciate feedback:
Background:
I have a web service to interface with an engine. This deals with the front-end and builds a set of commands for how to make a CAD model. These commands are for controlling the third-party CAD software's API. The engine can therefore be seen to have two main functions:
Build the CAD API instructions, which can be saved for later.
Execution, where it catches the instance of the CAD software running on the same computer and builds the model.
The second part is restricted from the general public. Only our in-house users should be able to use it. However, they want an otherwise identical front-end and user experience.
The problem is, if they connect to the same engine as the public, which lives on our main server, then the engine will look for an instance of the CAD package on the same machine as itself, i.e. the server, as stressed in the second point above. What should happen is that the engine finds the CAD instance running on the machine the controlling UI is on and uses that as its target. I have spoken to the CAD API support and they say they do not know how to do that.
And so we get to my solution: providing an offline, standalone copy of my web service on each of the employees' computers. The front-end will check at the start of the session whether a localhost connection is available. If not, it will use the main address, which takes it to my server. Otherwise it uses the local engine, which performs the default behavior of looking for a CAD package on the same machine as itself. Because it's locally installed, that is now the right machine, and it will find the user's CAD instance successfully.
Final points:
The engine cannot be accessed by the UI directly, as I am using Unity3D for the front-end and there are .NET compatibility issues.
I need a completely self-contained version of the software in the future anyway, so eventually I have to deal with having the engine accessed locally.
I ended up using IIS Express. I got the user to install it and then call a batch file installer I made, which sets up the config file and moves my web project to the correct directory.
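In outline, such an installer can be as simple as the following (a sketch; the folder names and port are hypothetical):

rem Copy the published web project to the user's machine
xcopy /E /I /Y WebService "%LOCALAPPDATA%\MyCompany\WebService"

rem Start IIS Express against that folder on a fixed local port
"%ProgramFiles%\IIS Express\iisexpress.exe" /path:"%LOCALAPPDATA%\MyCompany\WebService" /port:8080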