Alert on failed links for inbound port for a specific process on a VM

Let's say I have an Azure VM and there's a process called ABC.exe and it listens on port 34952. I want to monitor this port and perform some sort of health probe check for it. If it goes down, I want to be alerted.
I looked into using Log Analytics Workspace, as you can create an Alert rule for it. Something like this:
VMConnection
| where Direction == "inbound"
| where ProcessName == "ABC.exe"
| where DestinationPort in (34952)
| where LinksFailed > 0
The problem is, the "LinksFailed" metric is only available for Outbound connections, not Inbound. This is documented here - https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/vmconnection
Otherwise, the above works well for identifying any failed links for specific ports and their processes.
Is there another option I can use? I'm trying not to implement any solutions at the VM guest level. Ideally, if this can be done at the PaaS level, that would be great.

You do not have the option of probing the inbound port for a specific process on a VM via the 'LinksFailed' connection property, because VMConnection only records it for outbound links. To build a health probe check for the port in question, i.e. 34952, you can use the 'Network Watcher' service instead.
For this purpose, first enable the 'Network Watcher' agent extension on the virtual machine whose inbound port you want to monitor. Then open 'Network Watcher' in the Azure portal and select 'Connection Monitor'.
Click 'Create' to create a connection monitor, give it an appropriate name, and select the same region as your VM. In the 'Add sources' section, select the appropriate Azure or non-Azure endpoint in your resource group or subscription, and the subnet, to pick the specific endpoint from which connections to the process ABC.exe on port 34952 originate.
Once done, in the 'Add test configuration' section, create a new configuration that tests traffic on port 34952 by setting it as the 'Destination port'. Select 'TCP' as the protocol and check the 'Listen on port' box. Network Watcher will then probe the incoming traffic on port 34952 and generate an alert when checks fail beyond the threshold configured under 'Success Threshold (Checks failed %)', ensuring that you are alerted when the link for the process ABC.exe is down.
Finally, select your VM as the destination endpoint, i.e. the machine whose inbound port 34952 (where ABC.exe runs) you want to monitor. Once the monitor is running, the connection to this port is probed continuously and you will receive alerts on its failure as expected.
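As a quick sanity check outside the Azure tooling, you can also verify that ABC.exe actually accepts connections on 34952 with a plain TCP connect test. A minimal sketch; the hostname is a placeholder for your VM's address:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        String host = "my-vm.example.com"; // placeholder: your VM's address
        int port = 34952;                  // the port ABC.exe listens on
        try (Socket socket = new Socket()) {
            // Attempt the TCP handshake with a 5-second timeout.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Port " + port + " is reachable");
        } catch (IOException e) {
            System.out.println("Port " + port + " is NOT reachable: " + e.getMessage());
        }
    }
}
This mirrors what the TCP test configuration does, so it helps distinguish "the process is down" from "the monitor is misconfigured".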

Related

Kubernetes: How to connect one pod to another on an arbitrary port - with or without services?

We are currently transitioning our apps to Kubernetes, and I have two apps, appP and appH, that need to communicate with each other over a port unknown at startup time.
Unlike most of our apps, we don't have a set port for them to communicate over. Before Kubernetes, a third-party app (out of my control) would tell appP to start processing an item, itemA, identified by a unique id, and it would also tell appH to handle the processed data produced by appP.
To coordinate communication between appP and appH, appH would generate a port based on the unique id and publish the host and port info to connect on to an intermediate app (IA). appP, once done with its processing, queries IA for the connection information based on the unique id and sends the data over.
Now we have to adapt this to Kubernetes. Each app runs in its own deployment, as does the IA. So how can I set up appH to accept the connection over a port without being able to specify it in the service definition?
Note: I've seen some posts say that pods should be able to communicate with any other pods in the cluster regardless of specifying the ports in the service definition, but I can't seem to find much confirming information on this, and I don't have much free time on our cluster to bang my head against it.
Would it just work fine as is regardless? My biggest worry is the IP resolution. Currently appH grabs its IP based on the host it's running on (using Boost). I'm not sure how this resolves within a container.
If not, my next thought would be to set up a headless service with a selector for appH in order to allow for IP resolution. What I am unsure of then is whether I could have appP connect to <appH_Service>:<arbitrary_port>.
Would the service even have to be headless in this scenario? I mostly say headless with selector because I saw in one specific post that it is the only kind for which you don't need a port in the spec. Also, I am unsure whether the connection would go through unless it targeted the actual pod's IP rather than the service's.
Any info or clarification is appreciated. For the most part, I can't really change the architecture of these apps right now; I just have to get them talking to each other as is, and I haven't found much clear information on this type of case.
Note: We use helm and coredns if anyone is curious.
The Kubernetes networking model is as follows: a Pod is a group of containers that share a single network identity (one IP address within the cluster). Any port exposed by a container is thus automatically exposed on the Pod. The model requires that every Pod can communicate with every other Pod.
This means that your current design can work without modifications.
What Services bring to the table is a stable network identity for a group of Pods that is otherwise very volatile. That doesn't apply to your appP/appH coupling, I think.
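To answer your headless-service follow-up: if you do add a headless Service with a selector for appH, DNS resolves it directly to the Pod IPs, and appP can then connect on whatever port IA handed back. A minimal sketch; the Service name apph-headless and the default namespace are assumptions:
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectToAppH {
    public static void main(String[] args) throws IOException {
        // A headless Service with a selector publishes one DNS A record
        // per backing Pod, so this resolves to Pod IPs, not a virtual IP.
        InetAddress[] podIps = InetAddress.getAllByName("apph-headless.default.svc.cluster.local");

        int port = 41234; // the arbitrary port appH published to IA for this item
        try (Socket socket = new Socket()) {
            // Pod-to-Pod traffic is not filtered by Service port declarations,
            // so any port appH listens on is reachable.
            socket.connect(new InetSocketAddress(podIps[0], port), 5000);
            System.out.println("Connected to appH at " + podIps[0] + ":" + port);
        }
    }
}
A regular ClusterIP Service would require the port to be declared in the spec for kube-proxy to forward it; resolving straight to Pod IPs via the headless Service sidesteps that, which is why it fits your arbitrary-port case.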

How to connect to a Google Cloud SQL database from Metabase (when Metabase is running on Google Cloud Run)

I am running Metabase on Google Cloud Run and am trying to connect to a MySQL database instance (which also resides in Google Cloud SQL, in the same project).
NB. This is not Metabase's application database, but rather connecting a database to perform analysis on the data as per https://www.metabase.com/docs/latest/setting-up-metabase.html
When I run Metabase locally, I can connect fine using the Public IP (once my IP was whitelisted).
I cannot connect via Metabase on Cloud Run.
I have added the database to Cloud SQL connections within Cloud Run (per: https://cloud.google.com/sql/docs/mysql/connect-run)
The database I'm trying to connect to is a Read Replica (if that makes any difference)
When I 'allow all' on the Cloud SQL instance using 0.0.0.0/0 I am able to connect using the Public IP. Once I remove this rule I cannot connect.
I understand Cloud Run does not yet support Cloud SQL Private IPs
For connecting to the database I am constrained to using Metabase's web interface:
Within this interface I have tried:
Setting Host to the public IP
Setting the Additional JDBC connection string options to cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory as per https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory (with <INSTANCE_CONNECTION_NAME> replaced with the real name)
When I set this the error is: Could not connect to address=(host=<HOST>)(port=3306)(type=master) : Socket fail to connect to host:<HOST>, port:3306. Socket factory failed to initialized with option "socketFactory" set to "com.google.cloud.sql.mysql.SocketFactory" (I have redacted the real <HOST> value)
Thoughts:
From what I now understand, when I set socketFactory in Additional JDBC connection string options the host is ignored.
I can only assume I have not properly formatted or configured the Additional JDBC connection string options field
Any help would be greatly appreciated.
You cannot connect to Cloud SQL through TCP connections from Cloud Run, as stated in the documentation:
Cloud Run (fully managed) does not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address such as 127.0.0.1 or 172.17.0.1.
You can connect to the Cloud SQL instance using the instance connection name, with code similar to this:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

// The configuration object specifies behaviors for the connection pool.
HikariConfig config = new HikariConfig();
// Configure which instance and what database user to connect with.
config.setJdbcUrl(String.format("jdbc:mysql:///%s", DB_NAME));
config.setUsername(DB_USER); // e.g. "root"
config.setPassword(DB_PASS); // e.g. "my-password"
// For Java users, the Cloud SQL JDBC Socket Factory can provide authenticated connections.
// See https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory for details.
config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory");
config.addDataSourceProperty("cloudSqlInstance", CLOUD_SQL_CONNECTION_NAME);
config.addDataSourceProperty("useSSL", "false");
// ... Specify additional connection properties here.
// Initialize the connection pool using the configuration object.
DataSource pool = new HikariDataSource(config);
(Source: https://cloud.google.com/sql/docs/mysql/connect-run)
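Once the pool is initialized, usage is plain JDBC; for example, a trivial connectivity check:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// ... after building `pool` as above:
try (Connection conn = pool.getConnection();
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT 1")) {
    while (rs.next()) {
        System.out.println(rs.getInt(1)); // prints 1 if the connection works
    }
} catch (SQLException e) {
    e.printStackTrace();
}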
If you are concerned about the security of the connection, you can always choose to connect through the Cloud SQL proxy using the JDBC socket factory. Please note that in this situation your JDBC URL should look like this:
jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>
Try to connect using the full URL and review it a few times. I've seen this identical error when people made a small mistake in the JDBC URL, like an extra semicolon or colon.
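One way to rule out a malformed URL is to test the exact same string outside Metabase with plain JDBC. A sketch; the project, instance, database and credentials below are placeholders, and it assumes the MySQL driver and the socket factory jar are on the classpath with Google application default credentials available:
import java.sql.Connection;
import java.sql.DriverManager;

public class TestCloudSqlUrl {
    public static void main(String[] args) throws Exception {
        // Same shape as the URL above, with placeholder values filled in.
        String url = "jdbc:mysql:///mydb"
                + "?cloudSqlInstance=my-project:us-central1:my-instance"
                + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory"
                + "&useSSL=false"
                + "&user=metabase&password=secret";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}
If this connects but Metabase still fails, the problem is more likely the Cloud Run service configuration than the connection string.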
You can't use Cloud SQL private IP with Cloud Run; it's not yet supported, so you can forget that route. The read replica changes nothing here; you could even save money and delete it!
First, allow your Cloud SQL database to be reached from the 0.0.0.0/0 network. This way you can validate that your Cloud Run container works correctly with a database open to the internet.
Then delete this configuration and retry, following the guide for connecting your MySQL instance to Cloud Run (https://cloud.google.com/sql/docs/mysql/connect-run) as you did. In your application, you also have to add a dependency (Maven or Gradle) to get the socket factory jar (the artifact is com.google.cloud.sql:mysql-socket-factory, or a Connector/J-version-specific variant matching your driver).
It should work. Share more code or configuration if you need more help!
Two years later, and I think perhaps conditions have changed. I was able to do this on Google App Engine, using the beta_settings configuration option in the app.yaml definition file.
For example:
beta_settings:
  cloud_sql_instances:
    - <PROJECT>:<REGION>:<METABASE_BACKEND_POSTGRESQL_INSTANCE>=tcp:5432
    - <PROJECT>:<REGION>:<SOME_MYSQL_CLOUD_INSTANCE>=tcp:3306  # same project and region
When adding the database connection in the Metabase UI, the IP address needs to be set to 172.17.0.1, per the Google documentation on the flex environment.

AWS - Managing EC2 user sessions

I have an EC2 instance (Windows Server 2012 R2) that is running a NodeJS process. The problem is that after I get disconnected from the server (disconnecting is supposed to keep everything running on the server, including my NodeJS process), the session is killed after a few minutes, and with it the process.
When I connect again, it is like I have signed out. How can I solve this? I really need the process to keep running without having to stay ALWAYS connected to the server.
Normally a Windows installation will not log off disconnected sessions.
Your installation has probably been customized to log off disconnected RDP sessions.
Load the Group Policy Object Editor and go to Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Session Time Limits.
Check that the setting "Set time limit for disconnected sessions" is either Not Configured, or Enabled, but with a setting of Never.
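If you'd rather check the effective policy from a script than through the editor, the setting lands in the registry when configured. A sketch that shells out to reg.exe (the MaxDisconnectionTime value, in milliseconds, backs the "Set time limit for disconnected sessions" policy; a failed query usually just means Not Configured):
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CheckDisconnectTimeout {
    public static void main(String[] args) throws Exception {
        // Query the Terminal Services policy key for the disconnect time limit.
        Process p = new ProcessBuilder(
                "reg", "query",
                "HKLM\\SOFTWARE\\Policies\\Microsoft\\Windows NT\\Terminal Services",
                "/v", "MaxDisconnectionTime")
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            r.lines().forEach(System.out::println);
        }
        System.out.println("exit code: " + p.waitFor()); // non-zero: value not set
    }
}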

VSTS Azure File Copy task - Access denied

The VSTS task - Azure File Copy keeps giving me an access denied error, even though I have configured WinRM over HTTPS for my Azure VM.
I am running the build agent locally (not hosted) and from my machine, I am successfully able to PsRemote into my Azure VM.
i.e. Enter-PsSession executes successfully.
I tried giving all sorts of combinations for the user, from .\Administrator, .\administrator, nithish and .\nithish (which is the user name I chose while creating the VM).
What can be the problem here?
Detailed error
Connecting to remote server dscwitharm.eastus2.cloudapp.azure.com failed with the following error message : Access is denied. For more information, see the about_Remote_Troubleshooting Help topic.To fix WinRM connection related issues, select the 'Enable Copy Prerequisites' option in the task. If set already, and the target Virtual Machines are backed by a Load balancer, ensure Inbound NAT rules are configured for target port (5986). Applicable only for ARM VMs. For more info please refer to https://aka.ms/azurefilecopyreadme
Please try to use HOSTNAME\username instead of just username in the VSTS task. I had the same problem, and it is solved now.
In your case it will be DSCWITHARM\admin_username_or_whatever_you_are_using.
I have a similar problem.
During the VSTS task, the copy to ARM VMs does not work at all. I can connect to the target host via the SSL version of WinRM using PowerShell.
Uploading to blob storage also works fine.
2016-09-16T13:22:30.9409877Z ##[error]Connecting to remote server _______________ failed with the following error message : Access is denied. For more information, see the about_Remote_Troubleshooting Help topic.To fix WinRM connection related issues, select the 'Enable Copy Prerequisites' option in the task. If set already, and the target Virtual Machines are backed by a Load balancer, ensure Inbound NAT rules are configured for target port (5986). Applicable only for ARM VMs. For more info please refer to https://aka.ms/azurefilecopyreadme
2016-09-16T13:22:30.9878694Z Finishing task: AzureFileCopy

filezilla Connection timed out

This might seem like a duplicate question but it is not. I tried to go through similar questions but I couldn't find a fix for my problem. Here is my problem:
I need to set up an ftp connection on company servers.
I can easily connect to the ftp server from FileZilla on my PC, but when I try it from one of the server machines to the file server, all I see is the following:
Response: fzSftp started
Command: open "*****#***.***.***.**" **
Error: Connection timed out
Error: Could not connect to server
Status: Waiting to retry...
Status: Connecting to ***.***.***.**...
Response: fzSftp started
Command: open "*****#***.***.***.**" **
Error: Connection timed out
Error: Could not connect to server
I googled the "Connection timed out" error and realized that the first place to check is the firewall or router settings. These are outsourced to another company, and they say the issue is solved and it should work fine. I don't know where else to look.
I've had lots of issues with FileZilla. You might try another client first to see if FileZilla itself is the issue.
If you're on Windows, I highly suggest the open source project WinSCP (https://winscp.net/eng/download.php). For Mac, Cyberduck (https://cyberduck.io/?l=en) is solid (and free), though you may prefer Transmit.
I was having this problem after upgrading FileZilla. I downgraded it to a previous version and it worked like a charm. I came across this ticket thread and it was absolutely helpful: FileZilla Support Ticket
Check your security group rules. You need a security group rule that allows inbound traffic from your public IP address (Google: "what is my IP?") on the proper port.
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
In the navigation pane, choose Instances, and then select your instance.
In the Description tab, next to Security groups, choose view rules to display the list of rules that are in effect.
For Linux instances: verify that there is a rule that allows traffic from your computer's public IP to port 22 (SSH).
For Windows instances: verify that there is a rule that allows traffic from your computer's public IP to port 3389 (RDP).
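If you prefer to inspect the rules programmatically rather than through the console, the EC2 API exposes them. A sketch using the AWS SDK for Java v2; the security group ID is a placeholder, and default credentials and region are assumed:
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DescribeSecurityGroupsRequest;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.SecurityGroup;

public class ListSecurityGroupRules {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            DescribeSecurityGroupsRequest request = DescribeSecurityGroupsRequest.builder()
                    .groupIds("sg-0123456789abcdef0") // placeholder group ID
                    .build();
            for (SecurityGroup sg : ec2.describeSecurityGroups(request).securityGroups()) {
                for (IpPermission perm : sg.ipPermissions()) {
                    // Print each inbound rule: protocol, port range, allowed CIDRs.
                    System.out.printf("%s %s-%s from %s%n",
                            perm.ipProtocol(), perm.fromPort(), perm.toPort(),
                            perm.ipRanges());
                }
            }
        }
    }
}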
Please note that the public IP and internal IP will be different addresses, such as 203.0.113.74 for the public one, while internal to the server network it will be something more like 192.168.10.74.
This is why you can easily connect from your PC, which uses the public IP address; from the internal IP network of the company servers, that address will not be valid, and the internal one needs to be used instead.
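To see which addresses a server machine actually has (and confirm you are on an internal 192.168.x.x network), you can enumerate its network interfaces; a minimal sketch:
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

public class ListAddresses {
    public static void main(String[] args) throws Exception {
        // Internal (RFC 1918) addresses like 192.168.x.x show up here;
        // the public IP usually does not, since it lives on the NAT device.
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                System.out.println(nic.getName() + " -> " + addr.getHostAddress());
            }
        }
    }
}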
Try this (200 is just an example; increase it and retry):
Edit --> Settings --> Connection --> Timeout in seconds = 200