I am new to Hyperledger. I have defined my model for the network and successfully deployed it locally (on my system). Everything is working as expected. I want to replicate the same setup and make it public so that other team members can use it too.
How can I deploy the same over cloud hosting services like AWS or OpenStack?
I just want the blockchain services to be available publicly.
IBM Cloud offers a way to do this if you have an IBM Cloud account.
If you do not have an account and do not want one, you could at least look at the scripts used for deploying to Kubernetes.
IBM Cloud Sandbox
The scripts themselves can be downloaded/cloned with this command: git clone https://github.com/IBM-Blockchain/ibm-container-service
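As a hedged sketch of the flow once cloned (this assumes kubectl is already pointed at your target cluster; the exact script to run for your cluster type is described in the repo's README):

git clone https://github.com/IBM-Blockchain/ibm-container-service
cd ibm-container-service
kubectl config current-context   # verify you are targeting the right cluster
# then run the create/deploy script named in the README, which stands up
# the Hyperledger network pods via kubectl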
I am very new to GCP and I would greatly appreciate some help here ...
I have a Docker containerized application that runs in AWS/Azure but needs to access GCP through both the gcloud SDK and the Google Cloud client libraries.
What is the best way to set up gcloud authentication from an application that runs outside of GCP?
In my Dockerfile, I have this (cut short for brevity):
# install the Google Cloud SDK into the image and put it on PATH
ENV CLOUDSDK_INSTALL_DIR /usr/local/gcloud/
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:$CLOUDSDK_INSTALL_DIR/google-cloud-sdk/bin
# add the extra components the application needs
RUN gcloud components install app-engine-java kubectl
This container is currently provisioned from an Azure App Service and AWS Fargate. When a new container instance is spawned, we would like it to be gcloud-enabled, with a service account already attached, so our application can deploy resources on GCP using its Deployment Manager.
I understand gcloud requires us to run gcloud auth login to authenticate to an account. How can we automate the provisioning of our container if this step has to be manual?
Also, from what I understand, the cloud client libraries can read the path to a service account key JSON file from an environment variable (GOOGLE_APPLICATION_CREDENTIALS). So this file either has to be stored inside the Docker image itself, or has to be mounted from external storage at the very least?
How safe is it to store this service account key file in external storage? What are the best practices around this?
There are two main means of authentication in Google Cloud Platform:
User Accounts: Belong to people; they represent the people involved in your project and are associated with a Google Account.
Service Accounts: Used by an application or an instance.
Learn more about their differences here.
Therefore, you are not required to use the command gcloud auth login to perform gcloud commands.
You should be using gcloud auth activate-service-account instead, along with the --key-file=<path-to-key-file> flag, which allows you to authenticate without having to sign into a Google Account with access to your project every time you need to call an API.
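For example, in an entrypoint or startup script (the service account email and key path below are placeholders):

gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com \
    --key-file=/secrets/my-sa-key.json
# subsequent gcloud commands in the container now run as this service account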
This key should be stored securely, preferably encrypted, on the platform of your choice. As an example, you can learn how to do that in GCP here.
Take a look at these useful links for storing secrets in Microsoft Azure and AWS.
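As a hedged sketch of the mounting approach you mention (the image name and paths are placeholders), the key can be injected at runtime rather than baked into the image:

docker run \
  -v /host/secrets:/secrets:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
  my-image
# the client libraries pick up GOOGLE_APPLICATION_CREDENTIALS automatically;
# gcloud itself still needs the activate-service-account step above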
On the other hand, you can deploy services to GCP programmatically, either using the Cloud Client Libraries in your programming language of choice, or using Terraform, which is very intuitive if you prefer it over the Google Cloud SDK through the CLI.
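Since you mention Deployment Manager: once the service account is activated, deployments can also be driven from the CLI (the deployment and config file names below are placeholders):

gcloud deployment-manager deployments create my-deployment --config config.yaml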
Hope this helped.
I have my source code in CodeCommit and my new client is on GCP. They want to connect CodeCommit to Google Cloud Build; is there any option for that?
Given the fact that GCP and AWS are competing cloud providers, I would say that you will not find a way to trigger Google Cloud Build from AWS CodeCommit, which is what I believe you mean by "integrating" both products.
What I would do in your scenario is replicate your CodeCommit repository in its GCP equivalent, Google Cloud Source Repositories. You can find a tutorial on how to set up build triggers from Cloud Source Repositories in this documentation. Another option is pushing a container ready to be deployed into Container Registry and deploying that instead; you can follow these steps for that.
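As a hedged sketch of the mirroring step (the project, region, and repository names are placeholders, and this assumes gcloud is already authenticated against your GCP project):

# create the Cloud Source Repository and push the CodeCommit history into it
gcloud source repos create my-repo
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
cd my-repo
git remote add google https://source.developers.google.com/p/my-project/r/my-repo
git push --all google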
In the examples for Amazon Chime, for instance here https://github.com/aws-samples/amazon-chime-sdk-classroom-demo, they imply that it should be deployed and run on an AWS server via Cloud9. However, I want to deploy and run it on some other VPS, such as a DigitalOcean or Linode server.
The main question: can that be done at all? Is it supported?
If yes, how? General pointers would help. Which example should I use, and where is it described?
Eventually what I want is this:
Say I have a teaching website that I run on DigitalOcean or Linode, not on AWS. I want to be able to use Amazon Chime in such a way that my users go to my website and connect to a video class from my website as well.
The Chime service would need to run on AWS, but you can have a link to the Chime service endpoint from any website hosted anywhere else.
To use the Amazon Chime web application, your students would sign in to https://app.chime.aws/ from their web browser. You would have that link on your website.
See https://docs.aws.amazon.com/chime/latest/ug/chime-web-app.html
Note about the demo: it shows how to use the Amazon Chime SDK to build an online classroom in Electron and React. If you are using that deployment method, you can host the React app anywhere under a private domain on any host. The app will run anywhere while connecting back to the AWS service endpoint.
Resources would be deployed in AWS. No way around it.
The deployment script can be run from your own laptop, Cloud9, and/or any other Linux server. You just need to be able to run git clone and script/deploy.js.
You'll also need to make sure that environment is configured with appropriate AWS credentials. Cloud9 has these credentials out of the box. Any other environment (your laptop, a DigitalOcean VM, etc.) needs an AWS access key/secret pair, set up via aws configure.
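A hedged sketch of that setup on a non-AWS machine (aws configure prompts interactively; the repo URL and script path come from the demo above):

aws configure            # prompts for access key ID, secret access key, and region
git clone https://github.com/aws-samples/amazon-chime-sdk-classroom-demo
cd amazon-chime-sdk-classroom-demo
script/deploy.js         # deploys the demo's resources into your AWS account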
In my company we are using Google Cloud to develop our application and we'd like to use Cloud Build for CI/CD. The problem is that our Bitbucket server is self-hosted, not on Bitbucket's cloud. When we try to add a new repository, Google Cloud redirects us to Bitbucket's cloud, and I can't find any way to connect our company repository.
Is this currently possible?
I'm stuck at this screen:
Thanks
Cloud Build's Bitbucket Server and Bitbucket Data Center integration is now available in GCP. You can build repositories from Bitbucket Server and Bitbucket Data Center, including on-premises instances.
https://cloud.google.com/build/docs/automating-builds/build-repos-from-bitbucket-data-center
I have a MEAN stack application which needs to be cloud-hosted. Management needs it to be portable, and that brought me to check out Cloud Foundry. However, even for Cloud Foundry there are many provider options, like CF on Azure, PCF, IBM Bluemix, and so on, and I am not able to understand the differences between them. Can you please point me to something that helps me understand the differences between these various providers and make a decision? Also, what's the difference between Azure PaaS and Azure Cloud Foundry?
Cloud Foundry is an open source PaaS and because it's open source, you have the freedom to either:
host it yourself on a variety of IaaS
use a public, multi-tenant Cloud Foundry service
have a provider host a private CF for you
This is very similar to hosting options for Kubernetes for example (even though it's worth mentioning that Cloud Foundry predates Kubernetes by a couple of years).
"Pivotal Cloud Foundry" is a commercial distribution of Cloud Foundry targeted at large enterprises. It has a couple of features not found in the open source version, mostly related to deployment automation and integration of application services like MySQL etc. Pivotal is also a main sponsor of development work on the open source version of Cloud Foundry. PCF on Azure is kind of a "template service" that allows you to quickly deploy a private PCF installation on Azure, so it's to some degree a combination of hosting options 1) and 3).
You specifically asked about the difference between various public Cloud Foundry service providers. Here's the most important points:
data center location and related privacy concerns (PWS runs on AWS US locations for example)
choice of managed application services and plans (e.g. MySQL, PostgreSQL etc.)
pricing for apps and application services
performance (available CPU per Diego Cell on which application containers execute, networking)
Cloud Foundry version and supported features like container-to-container networking or deployment of docker containers
quality and availability of support options, onboarding assistance
availability of legal assurances/contracts you may need, e.g. to comply with EU GDPR rules
Also worth reading: Cloud Foundry explained
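Whichever provider you choose, the deployment workflow is the same cf CLI, which is what keeps the app portable. A minimal sketch for a MEAN-style Node.js app (the API endpoint, service label, and plan are placeholders that vary by provider):

cf login -a https://api.example-cf-provider.com
cf push my-mean-app -m 512M
# MongoDB comes from the provider's service marketplace;
# the service label and plan differ between providers
cf create-service mongodb small my-mongo
cf bind-service my-mean-app my-mongo
cf restage my-mean-app      # restart so the binding's credentials are injected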
Cloud Foundry is an open source PaaS that can run on top of many different IaaS offerings. So you can go to https://github.com/cloudfoundry/cf-deployment and use it to install your own instance of Cloud Foundry on Azure, AWS, GCP, vSphere, OpenStack, SoftLayer ... etc.
PCF is a commercial product from Pivotal based on the open source Cloud Foundry. You buy it, and then you install and run it on an IaaS of your choice.
Bluemix is a commercial product from IBM which is also based on the open source Cloud Foundry. It also includes a set of services based on various IBM products, so with Bluemix, IBM runs and manages the cloud for you.
Azure PaaS is a set of services from Microsoft for deploying applications that run only on Azure, while Cloud Foundry can be installed on Azure or other IaaS providers.