How to SSH into DigitalOcean's App Platform app?

We're using DigitalOcean's CLI tool doctl and would like to SSH into our instances using the same tool. We're able to list apps using:
doctl apps list
but can't SSH into apps. Is this even supported by the CLI as of now?

Related

MySQL Server + phpMyAdmin on Ubuntu Server 20.04 from Google Cloud Platform, where to start?

I am new to Google Cloud Platform. I deployed a Marketplace solution that I think will help me install Craft CMS, but I don't know where to start: where can I find the password of the MySQL root user, and where is the phpMyAdmin password?
You have to finish the installation of MySQL after deploying the solution. You can't log in to phpMyAdmin because you have not set up a password for root yet.
I assume you deployed the solution MySQL Server + phpMyadmin on Ubuntu Server 20.04 and now have a VM that you can SSH into.
SSH into the VM.
Run sudo mysql_secure_installation to start the MySQL configuration.
Follow the on-screen prompts and reply 'y' to them.
Run the following commands to set up a password for root. Be sure to replace 'your_pass_here' with your own password.
sudo mysql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_pass_here';
FLUSH PRIVILEGES;
Now you can log off your SSH session and log in to phpMyAdmin using your new password.
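Before logging off, you can optionally confirm from the same SSH session that the new password works:
mysql -u root -p
If the mysql client accepts the password and drops you into the mysql> prompt, phpMyAdmin will accept it too.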
If you deployed one of the Marketplace solutions, you will have a deployment manifest in the Deployment Manager section of the Google Cloud console.
Go to the Google Cloud console: https://console.cloud.google.com/
Ensure that you have the correct project selected (the one you deployed the marketplace solution in) in the dropdown menu at the top of the screen.
In the search bar, type Deployment Manager and select "Google Cloud Deployment Manager".
Press the button "Go To Cloud Deployment Manager".
You will find all the deployments for that project listed; you should be able to find your deployment there.
Click on the deployment name; on the next screen you will find the deployment specifications, which usually include the names and passwords for the deployment.
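If you prefer the command line, the same information can be reached through the gcloud CLI (a quick sketch; my-deployment is a placeholder for your deployment's name):
gcloud deployment-manager deployments list
gcloud deployment-manager deployments describe my-deployment
The describe output shows the deployment's configuration and resources, which is typically where generated names and passwords appear.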

In a containerized application that runs in AWS/Azure but needs to access gcloud commands, what is the best way to set up gcloud authentication?

I am very new to GCP and I would greatly appreciate some help here ...
I have a Docker containerized application that runs in AWS/Azure but needs access to the gcloud SDK as well as the Google Cloud client libraries.
What is the best way to set up gcloud authentication from an application that runs outside of GCP?
In my Dockerfile, I have this (cut short for brevity)
# Install the Google Cloud SDK into a fixed location
ENV CLOUDSDK_INSTALL_DIR /usr/local/gcloud/
RUN curl -sSL https://sdk.cloud.google.com | bash
# Put gcloud on the PATH and install the extra components we need
ENV PATH $PATH:$CLOUDSDK_INSTALL_DIR/google-cloud-sdk/bin
RUN gcloud components install app-engine-java kubectl
This container is currently provisioned from an Azure App Service and AWS Fargate. When a new container instance is spawned, we would like it to be gcloud-enabled with a service account already attached, so our application can deploy stuff on GCP using its Deployment Manager.
I understand gcloud requires us to run gcloud auth login to authenticate. How can we automate the provisioning of our container if this step has to be manual?
Also, from what I understand, the cloud client libraries can read the path to a service account key JSON file from an environment variable (GOOGLE_APPLICATION_CREDENTIALS). So this file either has to be stored inside the Docker image itself or, at the very least, mounted from external storage?
How safe is it to store this service account key file in external storage? What are the best practices around this?
There are two main means of authentication in Google Cloud Platform:
User accounts: belong to people, represent the people involved in your project, and are associated with a Google Account.
Service accounts: used by an application or an instance.
Learn more about their differences here.
Therefore, you are not required to use the command gcloud auth login to run gcloud commands.
You should be using gcloud auth activate-service-account instead, along with the --key-file=<path-to-key-file> flag, which lets you authenticate without signing in to a Google Account with access to your project every time you need to call an API.
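For example (a sketch; the service account email, key path, and project ID are placeholders):
gcloud auth activate-service-account deployer@my-project.iam.gserviceaccount.com --key-file=/secrets/key.json --project=my-project
From then on, gcloud commands in that container run as the service account.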
This key should be stored securely, preferably encrypted, in the platform of your choice. Learn how to do it in GCP here, following these steps as an example.
Take a look at these useful links for storing secrets in Microsoft Azure and AWS.
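Since the container runs on AWS/Azure, a sketch of the AWS side might look like this (gcp-sa-key is a placeholder secret name, and the startup step is an assumption about your entrypoint):
aws secretsmanager create-secret --name gcp-sa-key --secret-string file://key.json
# at container startup: fetch the key, then activate the service account
aws secretsmanager get-secret-value --secret-id gcp-sa-key --query SecretString --output text > /secrets/key.json
gcloud auth activate-service-account --key-file=/secrets/key.json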
On the other hand, you can deploy services to GCP programmatically, either using the Cloud Client Libraries with your programming language of choice or using Terraform, which is quite intuitive if you prefer it over the Google Cloud SDK CLI.
Hope this helped.

Can Amazon Chime be deployed and run on a third-party VPS? And how?

In the examples of Amazon Chime, for instance here https://github.com/aws-samples/amazon-chime-sdk-classroom-demo, they imply that it should be deployed and run on an AWS server via Cloud9. However, I want to deploy and run it on some other VPS, such as a DigitalOcean or Linode server.
The main question: can that be done at all? Is it supported?
If yes, how? General pointers. Which example should I use, and where is this described?
Eventually what I want is this:
Say I have a teaching website that I run on DigitalOcean or Linode, not on AWS. I want my users to be able to go to my website and connect to a video class from there.
The Chime service itself needs to run on AWS, but you can link to the Chime service endpoint from any website hosted anywhere else.
To use the Amazon Chime web application, your students would sign in to https://app.chime.aws/ from their web browser. You would put that link on your website.
See https://docs.aws.amazon.com/chime/latest/ug/chime-web-app.html
Note about the demo: it shows how to use the Amazon Chime SDK to build an online classroom in Electron and React. If you are using that deployment method, you can host the React app anywhere under a private domain on any host. The app will run anywhere while connecting back to the AWS service endpoint.
Resources would be deployed in AWS. No way around it.
The deployment script can be run from your own laptop, Cloud9, and/or any other Linux server. You just need to be able to run git clone and script/deploy.js.
You'll also need to make sure that environment is configured with appropriate AWS credentials. Cloud9 has these credentials out of the box. Any other environment (your laptop, a DigitalOcean VM, etc.) needs an AWS access key/secret pair, set up via aws configure.
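A minimal sketch of what that looks like from any Linux machine (the exact deploy.js flags are documented in the demo repo's README):
git clone https://github.com/aws-samples/amazon-chime-sdk-classroom-demo.git
cd amazon-chime-sdk-classroom-demo
aws configure   # prompts for the access key ID, secret access key, and default region
node ./script/deploy.js   # pass the flags described in the README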

How Can I enable SSO login to Apache Zeppelin on AWS EMR

I created an AWS EMR cluster using http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-launch.html (I chose the application "Spark: Spark 2.1.0 on Hadoop 2.7.3 YARN with Ganglia 3.7.2 and Zeppelin 0.7.0" while creating the cluster) and I am able to access Apache Zeppelin.
Now I want to give Zeppelin access to a new user using their Gmail or Google SSO or any other login. How can I do this? Please point me to any documentation or steps.
*The SAML/SSO logins give access only to the AWS console, not to applications like Zeppelin that are hosted on the master node.
Zeppelin uses Apache Shiro, and there are some libraries and examples for using OAuth in Shiro:
shiro-oauth
Oauth2Relam.java
pac4j security library for Shiro: OAuth, CAS, SAML, OpenID Connect, LDAP, JWT...
But Zeppelin doesn't support OAuth extensions currently (0.8.0-SNAPSHOT), as far as I know. You might have to extend Zeppelin yourself.
Docs: Zeppelin Shiro Configuration for Realm
Single sign-on can be implemented using Apache Knox; KnoxSSO support was recently added to Zeppelin.
For configuration options, check out this link.
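As a rough sketch of the resulting shiro.ini realm section (based on the Zeppelin Shiro docs; the hostname, gateway paths, and public key path are placeholders for your Knox setup):
knoxJwtRealm = org.apache.zeppelin.realm.jwt.KnoxJwtRealm
knoxJwtRealm.providerUrl = https://knox-host.example.com/
knoxJwtRealm.login = gateway/knoxsso/knoxauth/login.html
knoxJwtRealm.logout = gateway/knoxssout/api/v1/webssout
knoxJwtRealm.cookieName = hadoop-jwt
knoxJwtRealm.publicKeyPath = /etc/zeppelin/conf/knox-sso.pem
authc = org.apache.zeppelin.realm.jwt.KnoxAuthenticationFilter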

Install custom plugin for Kibana on AWS ElasticSearch Instance

I want to know if it is possible to add a custom plugin to Kibana running on an AWS instance, as mentioned in this link.
From the command line we can type:
bin/kibana-plugin install some-plugin
But in the case of the AWS Elasticsearch Service there is no command prompt/terminal, as it is just a managed service and we don't get SSH access to it. We just have the management console. How do we add a custom plugin for Kibana in this scenario?
From the AWS Elasticsearch Supported Plugins page:
Amazon ES comes prepackaged with several plugins that are available from the Elasticsearch community. Plugins are automatically deployed and managed for you.
The page referenced above has all the plugins supported on each ES version.
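You can also check which plugins your domain is actually running by querying the _cat API (the endpoint below is a placeholder for your domain's endpoint):
curl -XGET 'https://my-domain.us-east-1.es.amazonaws.com/_cat/plugins?v'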
Side note: Kibana is installed and fully managed, but it runs as a Node.js application (not as an Elasticsearch plugin).