Not able to connect to Hue web UI from Cloudera Manager - google-cloud-platform

I have installed Cloudera Express on a Google Cloud Platform VM instance using Cloudera Manager. All the web services are running, but the web UI links of all services (Hue, HBase, Spark, etc.) are not loading. I can't even reach the login page. It gives the error below:
This site can’t be reached
instance-1.c.cluster-183105.internal’s server DNS address could not be found.
DNS_PROBE_FINISHED_NXDOMAIN
Please find the screenshot of the error here.

Your configuration in Cloudera Manager is pointing to internal DNS records.
So, for example, Cloudera Manager does a health check against the web server and reports it as reachable, because the check runs on that same internal network.
Your machine cannot resolve the internal DNS records unless you tunnel into that network.
The solution here is one of the following (both are sketched after this list):
Whitelist your network in GCP so that you can reach the cluster without exposing it to the open internet. You must then go through each component and change the web interface address to the public DNS name.
Or set up and use an SSH tunnel or VPN to any machine in the cluster.
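A minimal sketch of both options, assuming the default ports (7180 for Cloudera Manager, 8888 for Hue); the firewall rule name, source IP, SSH key, user, and node address are all placeholders:
# Option 1: whitelist your workstation's public IP in GCP
gcloud compute firewall-rules create cm-whitelist \
    --allow=tcp:7180,tcp:8888 \
    --source-ranges=203.0.113.10/32
# Option 2: tunnel Hue's port through any cluster node you can SSH into
ssh -i ~/.ssh/gcp_key -N -L 8888:instance-1.c.cluster-183105.internal:8888 user@NODE_PUBLIC_IP
# then browse to http://localhost:8888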
Note: Cloudera Director and Cloudera Altus are two offerings for running and configuring CDH in the cloud.

Related

Need to create a centralized DNS/LDAP for 3 cloud providers: AWS, GCP, Alibaba

I need to create a centralized DNS server or LDAP for all the cloud platforms: AWS, GCP, Alibaba.
I need tool names and an approach to get this done.
If you run DNS and LDAP servers in real VMs, you can use any DNS server and any LDAP server product, provided the products support replicating their data over TCP and you set up network connectivity between all the different public cloud deployments.
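For example, with BIND-style servers you would allow zone transfers between the deployments and then verify replication from each cloud; a minimal check, assuming a hypothetical internal zone and name server:
# should dump the whole zone if transfers are permitted across clouds
dig @ns1.aws.example.internal example.internal AXFR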

proxy(?) server for connecting to cloud sql instance (GCP)

I have a PostgreSQL database on the Google Cloud Platform (Cloud SQL). I'm currently managing this database through pgAdmin, installed on my laptop. I've added the IP address of my laptop to the whitelist on the Cloud SQL settings page. This all works.
The problem is: when I go somewhere else and connect to a different network, the IP address changes and I cannot connect to the PostgreSQL database (through pgAdmin) from my laptop.
Does anyone know a (secure) solution, involving a proxy server (or something else), to connect from my laptop (and only my laptop) to my PostgreSQL database, even if I'm not on a whitelisted network (IP address)? Maybe I can set up a VM instance, install a proxy server, and use that? But I have no clue where to start (or what to search for).
You have many options for connecting to a Cloud SQL instance from an external application, such as a public IP address with SSL, a public IP address without SSL, the Cloud SQL Proxy, etc. You can see all of them here.
Among these options is the Cloud SQL Proxy; it provides secure access to your instances without the need for authorized networks or configuring SSL on your part.
You only need to follow the steps listed here and you will be able to connect to your Cloud SQL instance using the proxy:
Enable Cloud SQL Admin API on your console.
Install the proxy client on your local machine (Linux):
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
Determine how you will authenticate the proxy. You can use a service account or let the Cloud SDK take care of the authentication.
However, if required by your authentication method, create a service account.
Determine how you will specify your instances for the proxy. Your options for instance specification depend on your operating system and environment.
Start the proxy using either TCP sockets or Unix sockets (see the sketch after these steps).
Take note that as of this writing, Cloud SQL Proxy does not support Unix sockets on Windows.
Update your application to connect to Cloud SQL using the proxy.
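A minimal sketch of the TCP-socket variant on Linux, assuming gcloud is already authenticated; the instance connection name my-project:us-central1:my-instance is a placeholder:
# forward local port 5432 to the Cloud SQL instance
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432 &
# then point pgAdmin (or psql) at 127.0.0.1:5432
psql "host=127.0.0.1 port=5432 user=postgres dbname=postgres"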

How to connect a client to a remote Windows Server 2019 AWS EC2 instance

We have a very difficult problem here. We have a Windows Server 2019 Base x64 instance on Amazon EC2; we connected through RDP, set up a forest, and activated AD DS, and also activated DNS. But whenever we try to connect, we are not allowed to.
We have opened all the relevant ports on inbound traffic rules.
We have added users.
We have tried searching internet and various tutorials.
In Server Manager:
Added the public IPv4 address to the IPv4 settings of our adapter.
Went to the computer settings and entered the domain under computer domain, but no luck.
Disabled the firewall in Server Manager.
We want our clients on a different network to connect to the server hosted elsewhere on AWS.
We are really new to this; can someone guide us through it?
Please make sure there is network connectivity between your client and your DC, which is set up on an EC2 instance.
[1] In case your clients are on AWS (meaning different EC2 instances) and in a different network, you need to create VPC peering or use a Transit Gateway so that they have proper network connectivity.
[2] In case your clients are not on AWS and are in an on-prem environment, you need to have a VPN connection between your client and your DC.
So in summary, you need network connectivity between your client and DC so that clients can join your domain (a CLI sketch for case [1] follows below).
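A minimal sketch of case [1] with the AWS CLI; the VPC and peering-connection IDs are placeholders, and you still need to add routes toward the peer CIDR in both VPCs' route tables:
# request peering from the requester VPC and accept it in the accepter VPC
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333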
What do you mean by "whenever we try to connect we are not allowed to"?
What are you trying to connect to, the Windows EC2 instance?
Are you saying that the instance is joined to an AWS Directory Service domain but you can't connect to the instance using one of the users in your AWS directory?
Edit: This should have been a comment but couldn't post comments at the time of answering.

Clustering Eureka Servers in Google Cloud

We are using Spring Cloud Netflix Eureka for Service Registration. We will be deploying all microservices in GCP (Google Cloud).
Environment
We have Eureka Servers running as a cluster.
Each Eureka Server registers itself as a client with its peer in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8762/eureka
Client microservices register/enroll themselves by providing the Eureka Server IPs in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8761/eureka,http://xx.xx.xx.xxx:8762/eureka
Since IP addresses and hostnames are dynamic in the cloud, can we configure Eureka Servers in a cluster without using an IP address/hostname?
Please provide a sample configuration to use in Google Cloud.
GCP maintains an internal DNS resolver for subnets (if you are using default OS images).
So you can use hostnames to resolve IP addresses, like prod-redis-2.c.project-<id>.internal.
You may need to configure links between subnets to avoid making IP addresses public (see the check below).
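A minimal check from inside the same VPC, assuming default GCE images; the instance names eureka-1/eureka-2 and the project my-project are hypothetical:
# should print the internal IPs if GCP's internal DNS resolves the peers
getent hosts eureka-1.c.my-project.internal
getent hosts eureka-2.c.my-project.internal
The same names can then be used in eureka.client.service-url.default-zone instead of raw IP addresses.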
I have not used GCP but have implemented and deployed Spring Cloud on PCF (which, at a high level, is pretty much the same as GCP).
You cannot make defaultZone completely dynamic. Why? Because these properties are picked up during application startup.
There needs to be something (some service or database) in your architecture that tells your services the dynamic hostnames/IP addresses of other services. That is the Eureka server in your case. All services need to know the address (hostname/IP address) of the Eureka service. Now if the Eureka server's hostname is dynamic, how will your services learn the new hostname when it changes?
You'll have to update the Eureka server's address manually. At most, you can externalize defaultZone to a centralized configuration server (or something similar). That way you'll only have to update the new address in one place.
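For example, with a Spring Cloud Config server the services fetch defaultZone at startup, so only the config repository needs updating when the Eureka address changes; a hypothetical check (the config-server host and the application/profile names are placeholders):
# returns the resolved properties, including eureka.client.service-url.default-zone
curl http://config-server.internal:8888/my-service/prod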

Troubles with deploying Pivotal Cloud Foundry on AWS

I have been trying to install Pivotal Cloud Foundry on AWS and I am having trouble with it.
The upload-cert section mentions that I need to create SSL certificates for:
*.system.example.com
*.login.system.example.com
*.uaa.system.example.com
*.apps.example.com
So, I've created the domain xxxxx.com in AWS Route53 and created a certificate in AWS ACM for the domain and subdomains.
So, my questions are:
Do I need to create subdomains (system, login, uaa, apps) in AWS Route53?
Do I need to bind my domain and subdomains somehow to PCF? Or does the installation process do it for me?
For now, if I open http://login.xxxxx.com/ it responds with a 503. What can be the reason?
What is the correct URL to open the PCF UI?
I have this error in Ops Manager. What can be the reason for it?
The same goes for logs: when I tried to download logs for the failed services, that failed too. What can be the reason?
Thank you for the help!
Do I need to create subdomains (system, login, uaa, apps) in AWS Route53?
Do I need to bind my domain and subdomains somehow to PCF? Or does the installation process do it for me?
You can create a wildcard subdomain (*.xxxxx.com) and alias using the instructions here: https://docs.pivotal.io/pivotalcf/1-10/customizing/cloudform-er-config.html#cname
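A hedged sketch of that wildcard record with the AWS CLI; the hosted-zone ID and the load-balancer DNS name are placeholders:
# UPSERT a wildcard CNAME pointing at the PCF routers' load balancer
aws route53 change-resource-record-sets --hosted-zone-id Z1234567890 \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"*.xxxxx.com","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"my-pcf-elb-123.us-east-1.elb.amazonaws.com"}]}}]}'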
What is the correct URL to open the PCF UI?
If you mean Ops Manager, it is whatever DNS entry you created and pointed to the Ops Manager public IP address in this step: https://docs.pivotal.io/pivotalcf/1-10/customizing/cloudform-om-deploy.html#create-dns
For the ERT UI, there is the Pivotal Apps Manager https://docs.pivotal.io/pivotalcf/1-10/console/index.html
which is usually apps.system.xxxx.com.
You can see what system apps are deployed by connecting to Cloud Foundry using the CLI and seeing which apps are in the system org, and what their routes are (see the sketch below).
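A minimal sketch with the cf CLI; the API endpoint and user are placeholders:
cf login -a https://api.system.xxxxx.com -u admin
cf target -o system   # switch to the system org
cf apps               # lists the system apps and their routes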
For now, if I open http://login.xxxxx.com/ it responds with a 503. What can be the reason?
If the DNS has not been set up, I'm surprised you're getting any response whatsoever. Usually you get 503s when the routers connected to the load balancers are failing for some reason (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-error-message.html#ts-elb-errorcodes-http503)
I have this error in Ops Manager. What can be the reason for it?
This would explain the 503s if the router is unhealthy. I would SSH into those machines and see what the logs say (in /var/vcap/sys/log), which should tell you what is going wrong (a sketch of this follows below).
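A hypothetical sketch with the BOSH CLI; the deployment name (cf) and instance group (router/0) are placeholders that depend on your installation:
bosh -d cf ssh router/0
sudo tail -n 100 /var/vcap/sys/log/gorouter/gorouter.log   # recent router logs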
The reason for the red instances on the Status page was that my AWS account had a limit on the number of instances, and it failed to create VMs for those nodes.
To find more information, open the Changelog (https://ops_manager_host/change_log) and then open the log of the FAILED setup.