AWS Server Migration Service Hyper-V Connector Unhealthy

I am trying to migrate servers from Hyper-V to AWS using the AWS Server Migration Service. I have followed the instructions at: https://docs.aws.amazon.com/server-migration-service/latest/userguide/HyperV.html
The Connector server VM was created (V1) and seems to be functioning correctly. The connector is on the same network as the Hyper-V Host machine. All firewalls on the Hyper-V host have been disabled; the Hyper-V network is designated as Private on the Hyper-V Host.
In the web interface, the Connector server shows:
AWS Server Migration Service
--------
AWS Region: us-west-2
AWS Service Endpoints: sms.us-west-2.amazonaws.com
Connector ID: x-xxxxxxxxxxxxxxxxx
VM Manager Account: xxxxxxxxxx
VM Manager Hostname(s): Microsoft® Hyper-V
VM Manager Type: Microsoft® Hyper-V
General Health
--------
AWS access key ID: XXXXXXXXXXXXXXXXXXXX
AWS connectivity: [GREEN CHECK]
Connector registered IP address is current: [GREEN CHECK]
Connector up-to-date: [GREEN CHECK] Version: 1.1.0.304
IAM user ARN: arn:aws:iam::134514015789:user/xxxxxxxxxx
Poller Service: [GREEN CHECK]
System time synchronization: [GREEN CHECK]
When the Connector server is initially connected to AWS SMS, the SMS Console indicates that the Connector server is Healthy. However, any attempt to sync the servers fails, and eventually the Connector server's status changes to Unhealthy.
There does not appear to be any way to determine what errors have occurred from the AWS Console. Is there a log location I am missing where I can begin to diagnose this issue?
I have tried reconnecting the Connector Server, and it did not help. I have performed a factory reset on the Connector Server and performed all of the configuration steps again, and it did not help.

The Connector server eventually indicated the problem: there was insufficient memory on the Connector server (4096 MB). I increased the memory to 5120 MB and the issue was resolved.
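For reference, the change can be made on the Hyper-V host with PowerShell, roughly like this (the VM name is a placeholder; static memory can only be changed while the VM is off):

# Stop the connector VM, raise its startup memory to 5120 MB, and start it again
Stop-VM -Name "SMS-Connector"
Set-VMMemory -VMName "SMS-Connector" -StartupBytes 5120MB
Start-VM -Name "SMS-Connector"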

Related

Migrate Connector activation failed (vSphere to GCP) javax.net.ssl.SSLHandshakeException

I'm trying to migrate a VM from vCenter to GCP.
Unfortunately, I've encountered an error on the Migrate Connector on vCenter:
**Migrate Connector activation failed** Error: Migrate Connector "projects/XXXXXX/locations/europe-west1/sources/migrate-vsphere/datacenterConnectors/migrate-vsphere-ptwcvj" for source "migrate-vsphere" connected to vCenter "XXX.XXX.XXX.XXX" failed due to an error when connecting to vCenter. Details: **HTTP transport error: javax.net.ssl.SSLHandshakeException: The server selected protocol version TLS10 is not accepted by client preferences [TLS12]**
Components:
VM Host
vCenter Server 5 Essentials
SSL.Version = All
config.vpxd.sso.solutionsUser.certification=/etc/vmware-vpx/ssl/rui.crt
config.vpxd.ssl.CAPath=/etc/ssl/certs
vSphere Client 5.1
VM
Windows Server 2019 Standard
Project in GCP
User Account with roles:
Create Service Account
Owner
Service Account Key Admin
VM Migration Administrator
Dedicated GCP service account created through Migration Connector VM interface.
M4C status:
Appliance connectivity and health:
Migrate Connector appliance health: Healthy
Appliance version: 2.1.1943
Proxy setting: not enabled
DNS: Automatic
Gateway: 0.0.0.0
Cloud APIs network connection: Reachable
VM Migration service:
Registered with VM Migration service: True
Connectivity to VM Migration service: True
Project: XXXXXX
Location: europe-west1
Source name: migrate-vsphere
Data center connector name: migrate-vsphere-uajfqq
Migrate Connector service account: vsphere2gcp@XXXXXXX.iam.gserviceaccount.com
On-Prem environment:
Appliance not connected to vSphere
Does anyone have an idea how to fix this, or where to find a solution? If anyone has encountered this problem, please tell me how you solved it.
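One way to narrow this down is to check which TLS versions the vCenter endpoint actually accepts, for example with openssl from any machine that can reach it (the hostname is a placeholder):

# Succeeds only if the endpoint accepts TLS 1.2
openssl s_client -connect vcenter.example.local:443 -tls1_2
# Succeeds only if the endpoint still accepts TLS 1.0
openssl s_client -connect vcenter.example.local:443 -tls1

If only the -tls1 handshake succeeds, that matches the error: the connector insists on TLS 1.2, while vCenter 5.x releases generally predate TLS 1.2 support, so upgrading vCenter is the likely path.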

Connecting to on-prem kafka cluster from cloud AWS using Kerberos auth

Is it possible to connect to an on-prem Kafka cluster using Kerberos authentication from a cloud-deployed service?
When we try to connect, we get the error below:
Caused by: KrbException: Generic error (description in e-text) (60) - Unable to locate KDC for realm "ABC.COM"
This is my JAAS config:
com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/pathtokeytab" principal="principal_name@ABC.COM";
Has anyone faced this error?
This link, under the heading "Network connectivity to Kerberos", says it is challenging to connect to an on-prem Kafka cluster from cloud-deployed services. Is it unachievable, or does it require some other configuration?
https://blog.cloudera.com/how-to-configure-clients-to-connect-to-apache-kafka-clusters-securely-part-1-kerberos/
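"Unable to locate KDC for realm" usually means the JVM has no way to find a KDC for ABC.COM: either the krb5.conf it loads has no [realms] entry for the realm, or the DNS SRV lookup fails from the cloud network. A minimal sketch of a krb5.conf that pins the KDC explicitly (the KDC hostname is a placeholder, and the KDC must also be reachable from the cloud service on port 88):

[libdefaults]
    default_realm = ABC.COM
    dns_lookup_kdc = false

[realms]
    ABC.COM = {
        kdc = kdc1.abc.com:88
    }

Point the JVM at it with -Djava.security.krb5.conf=/path/to/krb5.conf.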

Independent Azure WebJob is not able to connect to SQL Server hosted in an Azure VM

An independent Azure WebJob is not able to connect to a SQL Server hosted in an Azure VM, but we are able to connect to the same SQL Server from our local computers.
Error details:
The underlying provider failed on Open.
The job failed with the exception:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No such host is known.)
Is the WebJob able to connect to the SQL DB hosted in the Azure VM now? Also, is the Azure VM hosted in West Europe? If so, you might have received a message on the Azure portal and Service Health Dashboard (banner).
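Since "No such host is known" is a DNS failure, one quick check is to resolve and probe the server from the WebJob's own environment via the Kudu debug console, which ships name-resolution and TCP-probe helpers (the hostname below is a placeholder):

nameresolver.exe sqlvm.westeurope.cloudapp.azure.com
tcpping.exe sqlvm.westeurope.cloudapp.azure.com:1433

If the name does not resolve there, use the VM's public DNS name or IP in the connection string; if it resolves but the TCP probe fails, check the VM's network security group and the SQL Server remote-connection settings.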

Cannot Connect to Cloud SQL by Proxy from Cloud Shell

I am following the Django sample for GAE and have a problem connecting to the Cloud SQL instance by proxy from Google Cloud Shell. It is possibly related to a permission setting, since I see the request is not authorized.
Other context:
"gcloud beta sql connect auth-instance --user=root" has no problem to connect.
I have a service account for SQL Proxy Client.
I am possibly missing something. Could someone please shed some light? Thanks in advance.
Proxy log:
./cloud_sql_proxy -instances=auth-158903:asia-east1:auth-instance=tcp:3306
2017/02/17 14:00:59 Listening on 127.0.0.1:3306 for auth-158903:asia-east1:auth-instance
2017/02/17 14:00:59 Ready for new connections
2017/02/17 14:01:07 New connection for "auth-158903:asia-east1:auth-instance"
2017/02/17 14:03:16 couldn't connect to "auth-158903:asia-east1:auth-instance": dial tcp 107.167.191.26:3307: getsockopt: connection timed out
Client Log:
mysql -u root -p --host 127.0.0.1
Enter password:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
I also tried with a credential file, but still no luck:
./cloud_sql_proxy -instances=auth-158903:asia-east1:auth-instance=tcp:3306 -credential_file=Auth-2eede8ae0d0b.jason
2017/02/17 14:21:36 using credential file for authentication; email=sql-proxy-client@auth-158903.iam.gserviceaccount.com
2017/02/17 14:21:36 Listening on 127.0.0.1:3306 for auth-158903:asia-east1:auth-instance
2017/02/17 14:21:36 Ready for new connections
2017/02/17 14:21:46 New connection for "auth-158903:asia-east1:auth-instance"
2017/02/17 14:21:48 couldn't connect to "auth-158903:asia-east1:auth-instance": ensure that the account has access to "auth-158903:asia-east1:auth-instance" (and make sure there's no typo in that name). Error during get instance auth-158903:asia-east1:auth-instance: googleapi: **Error 403: The client is not authorized to make this request., notAuthorized**
I can reproduce this issue exactly if I only give my service account "Cloud SQL Client" IAM role. When I give my service account the "Cloud SQL Viewer" role as well, it can then connect. I suggest you try this and see if it helps.
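For reference, granting those roles to the proxy's service account would look something like this (project and account are taken from the proxy log above; the role names assume current gcloud IAM syntax):

gcloud projects add-iam-policy-binding auth-158903 \
    --member="serviceAccount:sql-proxy-client@auth-158903.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"
gcloud projects add-iam-policy-binding auth-158903 \
    --member="serviceAccount:sql-proxy-client@auth-158903.iam.gserviceaccount.com" \
    --role="roles/cloudsql.viewer"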
It looks like a network connectivity issue.
Read this carefully if you use a private IP :
https://cloud.google.com/sql/docs/mysql/private-ip
Note that the Cloud SQL instance is in a Google managed network and the proxy is meant to be used to simplify connections to the DB within the VPC network.
In short: running cloud-sql-proxy from a local machine will not work, because it's not in the VPC network. It should work from a Compute Engine VM that is connected to the same VPC as the DB.
What I usually do as a workaround is use gcloud ssh from a local machine and port forward over a small VM in compute engine, like:
gcloud beta compute ssh --zone "europe-north1-b" "instance-1" --project "my-project" -- -L 3306:cloud_sql_server_ip:3306
Then you can connect to localhost:3306 (make sure nothing else is listening there, or change the first port number to one that is free locally).
The Cloud SQL proxy uses port 3307 instead of the more usual MySQL port 3306. This is because it uses TLS in a different way and has different IP ACLs. As a consequence, firewalls that allow MySQL traffic won't allow Cloud SQL proxy by default.
Take a look and see if you have a firewall on your network that blocks port 3307. To use Cloud SQL proxy, authorize this port for outbound connections.
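If the client runs on a Compute Engine VM, an egress rule along these lines would open the port (rule name and network are placeholders; on a corporate network the equivalent change happens on your own firewall instead):

gcloud compute firewall-rules create allow-cloudsql-proxy-egress \
    --network=default --direction=EGRESS --action=ALLOW \
    --rules=tcp:3307 --destination-ranges=0.0.0.0/0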

WSO2 Kubernetes AWS deployment

Here is the issue I am encountering.
I am trying to deploy the WSO2 API Manager, which is open source.
The documentation on how to do this can be found here:
https://github.com/wso2/kubernetes-artifacts/tree/master/wso2am
Dockerfiles:
https://github.com/wso2/dockerfiles/tree/master/wso2am
What I did was build the Docker images required for Kubernetes.
I then pushed these Docker images to EC2 Container Service (roughly as sketched below).
I then updated the WSO2 Kubernetes spec files (controllers) to use the images I pushed to EC2 Container Service.
I then go into kubernetes-artifacts/wso2am and run "./deploy.sh -d".
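For context, the push to EC2 Container Registry follows the standard Docker workflow, roughly like this (account ID, region, and tag are placeholders):

# Authenticate Docker to the registry, then tag and push the locally built image
$(aws ecr get-login --region us-east-1)
docker tag wso2am:1.10.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/wso2am:1.10.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/wso2am:1.10.0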
It then runs the wait-for-launch script, but it just keeps looping and never detects that the server is up.
root#aw-kubernetes:~/wso2kubernetes/kubernetes-artifacts/wso2am# ./deploy.sh -d
Deploying MySQL Governance DB Service...
service "mysql-govdb" created
Deploying MySQL Governance DB Replication Controller...
replicationcontroller "mysql-govdb" created
Deploying MySQL User DB Service...
service "mysql-userdb" created
Deploying MySQL User DB Replication Controller...
replicationcontroller "mysql-userdb" created
Deploying APIM database Service...
service "mysql-apim-db" created
Deploying APIM database Replication Controller...
replicationcontroller "mysql-apim-db" created
Deploying wso2am api-key-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32013,tcp:32014,tcp:32015) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-key-manager" created
Deploying wso2am api-store Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32018,tcp:32019) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-store" created
Deploying wso2am api-publisher Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32016,tcp:32017) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-publisher" created
Deploying wso2am gateway-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32005,tcp:32006,tcp:32007,tcp:32008) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-gateway-manager" created
Deploying wso2am api-key-manager Replication Controller...
replicationcontroller "wso2am-api-key-manager" created
Waiting wso2am to launch on http://172.20.0.30:32013
.......
I tried commenting out the "/wait-until-server-starts.sh" script and having it just start everything, but I am still not able to access the API Manager.
I could really use some insight on this, as I am completely stuck; I have tried everything I can think of.
If anyone on the WSO2 team, or anyone who has done this deployment, could help out, it would be greatly appreciated.
My current theory is that this was never tested on AWS, only on a local setup, but I could be wrong.
EDIT:
Adding some outputs from kubectl. While it is in the loop waiting for the server to come up, I see these things:
root#aw-kubernetes:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-apim-db-b6b0u 1/1 Running 0 11m
mysql-govdb-0b0ud 1/1 Running 0 11m
mysql-userdb-fimc6 1/1 Running 0 11m
wso2am-api-key-manager-0pse8 1/1 Running 0 11m
Also, kubectl logs shows that everything started properly:
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent Server : WSO2 API Manager-1.10.0
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 34 sec
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent Mgt Console URL : https://wso2am-api-key-manager:32014/carbon/
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent API Publisher Default Context : http://wso2am-api-key-manager:32014/publisher
[2016-07-21 18:46:59,263] INFO - CarbonUIServiceComponent API Store Default Context : http://wso2am-api-key-manager:32014/store
@Alex This was an issue in the WSO2 Kubernetes Artifacts v1.0.0 release. We have fixed this in the master branch [1].
The problem was that the deployment process was trying to verify the WSO2 API-M server sockets using the private IP addresses of the Kubernetes nodes. We updated the scripts to use the public/external IP addresses when they are available via the Kubernetes CLI. For this to work, you may need to set up Kubernetes on AWS according to [2].
[1] https://github.com/wso2/kubernetes-artifacts/commit/53cc6979965ebed8800b803bb3454f3b758b8c05
[2] http://kubernetes.io/docs/getting-started-guides/aws/
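A quick way to check whether your nodes actually register an external IP for the updated scripts to pick up (the jsonpath query is an assumption about which address type the fixed scripts look for, based on the description above):

kubectl get nodes -o wide
# Or print just the ExternalIP of each node, if one is registered
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'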