Zeppelin on AWS EC2 (Ubuntu instance) - amazon-web-services

I installed Apache Spark and Zeppelin on an Ubuntu instance running on AWS. Zeppelin is starting fine and when I check the status it is OK:
sudo ./bin/zeppelin-daemon.sh status
Zeppelin is running [ OK ]
But I cannot use "ip address":8090 to see the Zeppelin web page and create notebooks. The IP address I am using is the public one AWS gave me (and I changed the port to 8090 in zeppelin-site.xml).
Should I change the server address in the zeppelin-site.xml?

Zeppelin is indeed running on port 8090 on that server, but the port is not accessible externally -- that is, it can only be accessed from the server itself.
No fear! You can use port forwarding to connect to it.
Linux
ssh -i keypair.pem -L 8090:localhost:8090 user@<IP-ADDRESS>
This tells SSH to forward any requests sent to port 8090 on the local computer to the remote machine's localhost:8090. Therefore, you can access Zeppelin via localhost:8090 on your computer.
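Once the tunnel is up, you can sanity-check it from a second terminal on your own computer (a hypothetical check, not part of the original answer):
# an HTTP response here means the tunnel is forwarding to Zeppelin
curl -I http://localhost:8090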
Here's an example of it in use: Big Data: Amazon EMR, Apache Spark and Apache Zeppelin – Part 2 of 2
PuTTY
If you are using PuTTY to connect to the host, there is an equivalent setting on the Tunnels configuration screen: forward source port 8090 to destination localhost:8090 on the remote machine.
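If you prefer a command line on Windows, PuTTY's companion tool plink accepts the same -L syntax (a sketch, assuming the key has been converted to PuTTY's .ppk format):
# -N: no remote shell, tunnel only
plink -ssh -N -i keypair.ppk -L 8090:localhost:8090 user@<IP-ADDRESS>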

Related

"It was not possible to connect to the redis server" Bitnami hosted on Google Cloud

Problem:
When I try to connect from my local machine to a Redis VM hosted on Google Cloud, the connection is refused.
QUESTION: How can I connect to Redis installed on a private VM in GCP after successfully connecting to the VPC via a VPN?
Setup:
VM hosted in Google Cloud without public IP
Redis installed on VM by deploying the VM using the Bitnami package from GCP Marketplace
Firewall rule added to GCP targeting my VM, allowing TCP ingress traffic on port 6379 for all IP ranges
VPN setup using OpenVPN to tunnel into GCP VPC from local machine (Windows)
What I know:
Redis is running... if I SSH into my VM and run redis-cli, everything works as expected
VPN is working... from my local machine, I can successfully ping my VM when connected to the VPN
The Redis config has the bind 127.0.0.1 line commented out to (theoretically) listen on all interfaces, after which I restarted Redis on my VM (I think)
The password I'm using is correct
What doesn't work:
StackExchange.Redis:
var redis = ConnectionMultiplexer.Connect("my.ip.to.vm", config =>
{
    config.Password = "my-redis-password";
});
Command Line from Local Machine (using redis-cli npm package):
rdcli -h my.ip.to.vm -a my-redis-password
Recap
What am I missing?
How can I connect to Redis installed on a private VM in GCP after successfully connecting to the VPC via a VPN?
Solution (this is embarrassing):
Stop VM
Start VM
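After the restart, connectivity can be sanity-checked from the local machine while connected to the VPN (a minimal check, assuming the firewall rule and VPN route described above; <vm-internal-ip> is a placeholder):
# prints PONG if Redis is reachable and the password is accepted
redis-cli -h <vm-internal-ip> -p 6379 -a my-redis-password ping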

Accessing localhost of GCP instance from local machine

I am trying to run my Flask app on a GCP instance. However, the app is served on that instance's localhost, and I want to access it from my own machine.
I watched a couple of videos and read some articles, but almost all were about deploying the app on GCP. Is there no simple way to just forward whatever is served on the VM instance's localhost to my PC's browser, so that if I submit some information in the app, it goes to the VM instance and the result comes back to my local browser?
You can use Local Port Forwarding when you ssh into the target instance hosted in GCP.
Local port forwarding lets you connect from your local machine to another server. To use local port forwarding, you need to know your destination server, source port and target port.
You should already know your destination server. The target port must be the one on which your flask app is listening. The source port can be any port that is not in use on your local computer.
Assuming the Flask app is listening on port 8080 on the GCP instance and you want to make it available on your local computer on port 9876, ssh into your GCP instance using the following command:
ssh -L 9876:127.0.0.1:8080 <username>@<gcpInstanceIP>
The same result can be achieved using gcloud compute ssh if you don't have the SSH key on the target instance.
The -- argument separates gcloud-specific arguments on the left from SSH arguments on the right:
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L <source-port>:localhost:<target-port>
You can also use the Google Cloud Shell:
Activate Cloud Shell from the top-right corner of the GCP web interface
ssh into your instance with Local Port Forwarding
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L 8080:localhost:<target-port>
Click Web Preview in the Google Cloud Shell, then Preview on port 8080.
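For example, with the Flask app from the question listening on port 8080, step 2 might look like this (instance name and zone are hypothetical):
gcloud compute ssh my-flask-instance --zone=us-central1-a -- -L 8080:localhost:8080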

SSH Port forwarding / Tunneling with multiple hops

Background
Three subnets exist in an AZ in AWS. Two of them are private and one is public.
The public subnet has a Jumpbox which can be connected to from my local machine via ssh using a pem file (sample: ssh -i my-key-file.pem ec2-user@host1).
The first private subnet has an EC2 instance that acts as an application server. It can only be reached from the Jumpbox via ssh, using the same pem file (sample: ssh -i my-key-file.pem ec2-user@host2, executed on host1).
The second private subnet hosts an Oracle instance using the AWS RDS service, running on port 1521. The DB can only be accessed from the app server/host2.
How I am working currently
host2 has sqlplus client installed.
First, I connect to host1, then to host2, and then run sqlplus to execute queries at the command line (no GUI).
I am planning to use a GUI tool like SQL Developer to connect right from my local machine. I thought this could be achieved using port forwarding/SSH tunneling.
I tried different options, but with no success. The following links were useful:
https://superuser.com/questions/96489/an-ssh-tunnel-via-multiple-hops
https://rufflewind.com/2014-03-02/ssh-port-forwarding
My Approach to SSH Tunneling
ssh -N -L 9999:127.0.0.1:1234 ec2-user@host1 -i my-key-file.pem -v -v -v
This is executed on my local machine.
On its own this does not do much, since I can already connect to host1 using ssh; I am using this host as my first hop. After this command, ssh listens on port 9999, which is local to my machine, and forwards any traffic arriving there through host1 to port 1234. My assumption is that if I point sqlplus on my local machine at localhost:9999, the traffic will arrive at host1:1234.
I used 127.0.0.1 because the target of an SSH tunnel is resolved from the SSH server's perspective, which here is host1; in other words, the target and the SSH server are the same host.
ssh -N -L 1234:db-host:1521 ec2-user@host2 -i my-key-file.pem -v -v -v
This is executed on host1
After this, ssh forwards any incoming traffic on port 1234 to the target (the DB host) on port 1521, using host2 as the tunnel.
Again, my assumption is that ssh is listening on port 1234 on host1, and any traffic arriving there will be delivered to the DB host through host2.
I executed both commands and did not see any error. I verified which ports are listening using netstat -tulpn | grep LISTEN.
After these two commands, my plan was to connect to the database with hostname localhost and port 9999.
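For reference, the pair of commands above is essentially a hand-rolled version of what OpenSSH's ProxyJump flag does in one step. A sketch of the single-command equivalent, assuming OpenSSH 7.3+ and the pem key loaded into ssh-agent (a command-line -i is not forwarded to the jump hop):
ssh-add my-key-file.pem
# jump through host1, then listen locally on 9999 and forward via host2 to the DB
ssh -N -J ec2-user@host1 -L 9999:db-host:1521 ec2-user@host2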
What's going wrong!
But when I try to connect to the DB from my local machine, I get an error from my SQL client: "Got minus one from a read call". I could not make sense of the debug messages in the ssh logs.
I believe my understanding of how port forwarding works might not be right. Any input would be helpful.
Thanks for your time!

Airflow integration with AWS development machine to access admin UI

I am trying to use Airflow for workflow management on my development machine on AWS. I have multiple virtual environments set up and have installed Airflow.
I am listening on port 8080 in my nginx conf:
listen private.ip:8080;
I have allowed inbound connection to port 8080 on my AWS machine.
I am unable to access my Airflow console or the admin page from my public IP / website address.
You can just create a tunnel to view the UI locally:
ssh -N -L 8080:ec2-machineip-compute-x.amazonaws.com:8080 YOUR_USERNAME_FOR_MACHINE@ec2-machineip-compute-x.amazonaws.com
Then open localhost:8080 in your browser to view the Airflow UI.

How to run sonatype nexus on aws ec2?

I need to put Sonatype Nexus 3 up on AWS. Following an old tutorial for Nexus 2, I was led to try this on EC2. What I'm currently trying is an instance with a security group that allows inbound requests from anywhere on ports 80, 8080, 22, 4000, 443, and 8081. I'm using an Amazon Linux AMI 2016.09.0 (HVM), SSD Volume Type instance. I installed Docker using the instructions from http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker. I then simply use the official Docker image from https://hub.docker.com/r/sonatype/nexus3/ with the following command.
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
Using docker ps I can confirm that it seems to be running. When I try to connect to the provided public DNS URL ending with amazonaws.com on port 8081, I simply get connection refused. The same happens on port 80 and the other ports, and when I add /nexus to the end of the URL.
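For what it's worth, one check worth running before blaming the image: the nexus3 container can take a minute or more to initialize, and connections may be refused or reset until it finishes. A quick way to watch for that (standard Docker commands, not from the original post):
# follow the container log until Nexus reports it has finished starting
docker logs -f nexus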
Attempting the quick test that the documentation for this image suggests:
curl -u admin:admin123 http://localhost:8081/service/metrics/ping
curl: (56) Recv failure: Connection reset by peer
Using the exact same Docker command on my local machine (OS X), I am able to access Nexus on localhost. Why can't I get this working?
The issue appears to have been with Sonatype's official image. A different image that works the exact same way worked perfectly with the exact same process.