Unable to do WinRM from Domain machine to Standalone machine (both are Azure VM) - azure-virtual-machine

I am trying to use WinRM to connect to a standalone machine from a DevOps pipeline agent machine. Both machines are part of our network and both are Azure VMs.
Analysis/activities so far:
I have domain user access on the agent machine.
I have a local admin user on the standalone machine.
The WinRM service shows as Running under Services on both machines.
Added the standalone machine's private IP to TrustedHosts on the agent machine.
Already executed Enable-PSRemoting -SkipNetworkProfileCheck on both machines.
Executed netstat -ab on the standalone machine; it shows port 5985 listening, but 5986 is not in the list.
$username = 'myUserName'
$password = 'myPassword'
$secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
$credentials = New-Object System.Management.Automation.PSCredential($username, $secpasswd)
# $name holds the standalone machine's private IP (the same value added to TrustedHosts)
Enter-PSSession -ComputerName $name -Port 5985 -Credential $credentials
The username/password in this code are the local admin credentials of the standalone machine.
Still failed to connect.

• I would suggest checking whether the group policy settings below are configured correctly on both the domain machine and the standalone VM. On the domain group policy server, go through the GPOs and verify that the following settings are configured:
Computer Configuration -> Administrative Templates -> Windows Components -> Windows Remote Management:
WinRM Client -> Allow Basic authentication -> Enabled
WinRM Client -> Allow CredSSP authentication -> Enabled
WinRM Client -> Allow Unencrypted traffic -> Enabled
WinRM Service -> Allow Remote Server Management through WinRM -> Enabled
WinRM Service -> Allow Basic authentication -> Enabled
WinRM Service -> Allow CredSSP authentication -> Enabled
WinRM Service -> Allow Unencrypted traffic -> Enabled
WinRM Service -> Turn On Compatibility HTTPS Listener -> Enabled
Similarly, under the same path, go to Windows Remote Shell -> Allow Remote Shell Access -> Enabled. Once these group policy settings are enabled on both the domain group policy server and the standalone Azure VM, the WinRM connection should succeed. Also make sure these settings are enabled in the local group policy on the domain-joined Azure VM and the standalone Azure VM. A PowerShell sketch for checking the effective values directly is shown after these steps.
• Once the above has been done, run the commands below in a command prompt on both the standalone Azure VM and the DevOps pipeline agent machine. The first two commands enumerate the WinRM listeners and show the current configuration; the last one configures the WinRM service with default settings:
winrm e winrm/config/listener
winrm get winrm/config
winrm quickconfig   (answer Y when prompted to make the changes)
The output of the last command will look similar to this:
WinRM has been updated for remote management.
WinRM service type changed to delayed auto start.
WinRM service started.
Created a WinRM listener on https://* to accept WS-Man requests to any IP on this machine.
The above commands should be executed in an elevated command prompt only. Along with that, ensure that the WinRM HTTPS port 5986 is open on both ends if HTTPS traffic is to be used; a quick connectivity check is sketched below. For more details, refer to:
https://learn.microsoft.com/en-us/windows/win32/winrm/installation-and-configuration-for-windows-remote-management
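If you prefer to verify the group policy items above from PowerShell rather than the Group Policy editor, a minimal sketch along these lines can show what is actually in effect (run in an elevated PowerShell session; 10.0.0.5 is a placeholder for the standalone machine's private IP):
# Client-side settings on the agent machine
Get-Item WSMan:\localhost\Client\TrustedHosts        # should contain the standalone machine's IP
Get-Item WSMan:\localhost\Client\Auth\Basic          # Allow Basic authentication (client)
Get-Item WSMan:\localhost\Client\AllowUnencrypted    # Allow unencrypted traffic (client)
# Service-side settings on the standalone machine
Get-Item WSMan:\localhost\Service\Auth\Basic
Get-Item WSMan:\localhost\Service\AllowUnencrypted
# Example of appending a (placeholder) private IP to TrustedHosts on the agent machine
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '10.0.0.5' -Concatenate -Force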
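As a quick connectivity check from the agent machine before retrying Enter-PSSession, something like the following narrows down whether the ports and the WinRM service respond at all (again, 10.0.0.5 is a placeholder for the standalone machine's private IP):
$target = '10.0.0.5'   # placeholder: standalone machine's private IP
# TCP reachability of the WinRM HTTP and HTTPS ports
Test-NetConnection -ComputerName $target -Port 5985
Test-NetConnection -ComputerName $target -Port 5986
# WS-Man level check using the standalone machine's local admin credentials
Test-WSMan -ComputerName $target -Port 5985 -Authentication Basic -Credential (Get-Credential)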

Related

WinRM not working with internal ip address of GCP from vpn source

After connecting my on-prem network to a GCP VPC using GCP VPN, can I access a virtual machine in the GCP VPC from the on-prem network using its internal IP address? I have configured the GCP Windows VM to enable WinRM, created firewall rules in GCP, and made sure the WinRM service is running with the appropriate ports open.
If I use the external IP, I can run the command below from a machine that is not on the VPN and get output.
Invoke-Command -ComputerName <ExternalIP> -ScriptBlock {Get-UICulture} -Credential $credential
If I run the same command from a machine on the VPN network using the internal IP, it gives me this error:
Connecting to remote server 10.xxx.x.xx failed with the following
error message : The WinRM client cannot process the request. If the
authentication scheme is different from Kerberos, or if the client
computer is not joined to a domain, then HTTPS transport must be used
or the destination machine must be added to the TrustedHosts
configuration setting. Use winrm.cmd to configure TrustedHosts. Note
that computers in the TrustedHosts list might not be authenticated.
You can get more information about that by running the following
command: winrm help config. For more information, see the
about_Remote_Troubleshooting Help topic.
CategoryInfo          : OpenError: (10.xxx.x.xx:String) [], PSRemotingTransportException
FullyQualifiedErrorId : ServerNotTrusted,PSSessionStateBroken
I'm not sure why, since all the firewall rules look fine and there is no rule that blocks WinRM traffic to the internal IP address.
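The error itself points at the TrustedHosts requirement: when the connection is not authenticated via Kerberos and HTTP is used, the target address must be explicitly trusted on the client. A minimal sketch of that step, with 10.128.0.10 as a placeholder for the VM's internal IP (run elevated on the on-prem client):
# Trust the internal IP on the client (placeholder address)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '10.128.0.10' -Concatenate -Force
# Then retry the same command against the internal IP
Invoke-Command -ComputerName 10.128.0.10 -ScriptBlock { Get-UICulture } -Credential $credential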

Activate GCP windows server VM failed

I created a Windows Server 2012 VM in GCP with one internal IP and one ephemeral external IP.
I can ping the Google KMS server 35.190.247.13,
but when I try to activate Windows with
cscript \windows\system32\slmgr.vbs /dlv
cscript \windows\system32\slmgr.vbs /skms 35.190.247.13:1688
cscript \windows\system32\slmgr.vbs /ato
the last step produces the error "The Software Licensing Service reported that the computer could not be activated. No Key Management Service (KMS) could be contacted. Please see the Application Event Log for additional information."
Do I need to enable Private Google Access in my VPC subnet even though I have an external IP?
ping is based on ICMP, which has no notion of a transport-layer port (that's why it is impossible to ping a specific port). The command to test TCP/1688 connectivity instead is:
powershell.exe Test-NetConnection 35.190.247.13 -Port 1688
If this doesn't work, you have to permit this destination in the firewall ruleset:
Address: 35.190.247.13
Port: 1688
Protocol: TCP
See Windows Licensing.
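Since the activation error also suggests checking the Application event log, a small PowerShell sketch for pulling recent Software Protection Platform entries may help pinpoint the failure (the provider-name filter is an assumption about how the licensing service logs on this system):
# Recent Application-log events from the Software Protection Platform (licensing) service
Get-WinEvent -LogName Application -MaxEvents 200 |
    Where-Object { $_.ProviderName -like '*Security-SPP*' } |
    Format-List TimeCreated, Id, Message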

Google Cloud Platform fix SSH

I have a problem with SSH access to my Google Compute Engine instance. I set up a server, deployed the application on it, configured the domain, and everything worked. A few days later, when I wanted to access it to make changes, SSH no longer worked. My one assumption is that I turned on the firewall and didn't add a rule for SSH; maybe that's the problem? But how do I access the machine now and enable it?
Thanks in advance.
To solve your issue you can connect to your VM instance via the serial console. Before connecting to the VM via the serial console, check whether you have enabled connections to your VM instance in the GCP firewall.
Please have a look at the step by step instructions below:
Enable serial console connection with gcloud command:
gcloud compute instances add-metadata NAME_OF_YOUR_VM_INSTANCE \
--metadata serial-port-enable=TRUE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> click on EDIT -> go to section Remote access and check Enable connecting to serial ports
Create a temporary user and password to log in: shut down your VM and set a startup script by adding, under the Custom metadata section, the key startup-script with the value:
#!/bin/bash
useradd --groups google_sudoers tempuser
echo "tempuser:password" | chpasswd
and then start your VM.
Connect to your VM via serial port with gcloud command:
gcloud compute connect-to-serial-port NAME_OF_YOUR_VM_INSTANCE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> and click on Connect to serial console
Check what went wrong.
Disable access via serial port with gcloud command:
gcloud compute instances add-metadata NAME_OF_YOUR_VM_INSTANCE \
--metadata serial-port-enable=FALSE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> click on EDIT -> go to the Remote access section and uncheck Enable connecting to serial ports. Keep in mind that, according to the documentation Interacting with the serial console:
Caution: The interactive serial console does not support IP-based access
restrictions such as IP whitelists. If you enable the interactive
serial console on an instance, clients can attempt to connect to that
instance from any IP address. Anybody can connect to that instance if
they know the correct SSH key, username, project ID, zone, and
instance name. Use firewall rules to control access to your network
and specific ports.
In addition, have a look at the third-party example Resolving getting locked out of a Compute Engine.
If you weren't able to connect via the serial console, check the logs:
Go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM -> at the VM instance details find section Logs and click on Serial port 1 (console)
Reboot your VM instance again.
Check full boot log for any errors or/and warnings.
If you find errors or warnings related to disk space, you can try to resize the disk according to the documentation Resizing a zonal persistent disk; see also the article Recovering an inaccessible instance or a full boot disk.
If nothing helps, try to follow the other recommendations from the documentation Troubleshooting SSH and update your question with your attempts.

Airflow integration with AWS development machine to access admin UI

I am trying to use Airflow for workflow management on my development machine on AWS. I have multiple virtual environments set up and have installed Airflow.
I am listening to port 8080 in my nginx conf as:
listen private.ip:8080;
I have allowed inbound connection to port 8080 on my AWS machine.
I am unable to access the Airflow console or the admin page from my public IP / website address.
You can just create an SSH tunnel to view the UI locally.
ssh -N -L 8080:ec2-machineip-compute-x.amazonaws.com:8080 YOUR_USERNAME_FOR_MACHINE@ec2-machineip-compute-x.amazonaws.com
Then open localhost:8080 locally to view the Airflow UI.

Can't access Cloud9 on port 8081 on Google cloud

I can't access Cloud9 on port 8081 running on Google cloud platform.
I am sure the application is running on that port, and applications on other ports on the same machine (e.g. http://xxx.xxx.xxx.xxx:3000) are accessible, so this doesn't seem to be an issue with the iptables settings.
I receive no response from the server http://xxx.xxx.xxx.xxx:8081/.
Google Cloud Platform configuration:
Allowed protocols and ports include tcp:8000-8089
IP Address set up as static and external
Command used to run Cloud9:
node server.js -w /home/workspace -l 0.0.0.0 -p 8081 -a username:password
The problem was that I had accidentally added some target tags in the Google Cloud firewall settings (Networking > Firewall rules).
Removing those tags solved my problem; I just use the default Apply to all targets setting.