Azure VM, your credentials did not work on remote desktop - azure-virtual-machine

I've just had a bit of fun trying to connect to a new VM I'd created. I found loads of posts from people with the same problem, so this answer details the points I've found.

(1) For me it worked with
<VMName>\Username
Password
e.g.
Windows8VM\MyUserName
SomePassword#1
(2) Some people have just needed to use a leading '\', i.e.
\Username
Password
(3) You can now reset the username/password from the app portal. There are PowerShell scripts which will also allow you to do this, but that shouldn't be necessary any more.
(4) You can also try redeploying the VM; you can do this from the app portal.
(5) This blog says that "Password cannot contain the username or part of username", but that must be out of date, as I tried that once I got it working and it worked fine:
https://blogs.msdn.microsoft.com/narahari/2011/08/29/your-credentials-did-not-work-error-when-connecting-to-windows-azure-vms/
(6) You may find links such as the one below which mention Get-AzureVM; that seems to be for classic VMs, and there appear to be equivalents for the Resource Manager VMs, such as Get-AzureRMVM:
https://blogs.msdn.microsoft.com/mast/2014/03/06/enable-rdp-or-reset-password-with-the-vm-agent/
For complete novices to PowerShell, if you do want to go down that road, here are the basics you may need. In the end I don't believe I needed this, just point 1:
Uninstall-Module AzureRM
Install-Module AzureRM -AllowClobber
Import-Module AzureRM
Login-AzureRmAccount (this will open a window which takes you through the usual logon process)
Add-AzureAccount (not sure why you need both, but I couldn't log on without this)
Select-AzureSubscription -SubscriptionId <the guid for your subscription>
Set-AzureRmVMAccessExtension -ResourceGroupName "<your RG name>" -VMName "Windows8VM" -Name "myVMAccess" -Location "northeurope" -username <username> -password <password>
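If you're on the newer Az PowerShell module instead of AzureRM, the rough equivalent is below (a sketch only, using the same placeholder resource group, VM name, region and credentials as above):
Connect-AzAccount
Set-AzVMAccessExtension -ResourceGroupName "<your RG name>" -VMName "Windows8VM" -Location "northeurope" -Name "myVMAccess" -UserName <username> -Password <password>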
(7) You can connect to a VM in a scale set because, by default, the load balancer will have NAT rules mapping ports from 50000 onwards, i.e. just remote desktop to the IP address:port. You can also do it from a VM that isn't in the scale set: go to the scale set's overview, click on the "virtual network/subnet" to get the internal IP address, then remote desktop to that address from the other VM.
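For example, assuming the load balancer's public IP were 52.0.0.1 (a made-up address here) and the NAT rule for your instance maps port 50000, from a Windows machine it's just:
mstsc /v:52.0.0.1:50000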

Ran into similar issues. It seems to need a domain by default. Here is what worked for me:
localhost\username
Another option can be vmname\username
Some more guides to help:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal#connect-to-virtual-machine
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/connect-logon

In April 2022, "Password cannot contain the username or part of username" was the issue.
During the creation of the VM in Azure everything was alright, but I wasn't able to connect via RDP.

Same in Nov 2022: you will be allowed to create a password that contains the user name, but during login it will display the credential error. Removing the user name from the password fixed it.

Related

Matillion: Cannot login with default ec2-user

I've launched a new AWS EC2 instance (m5.large) based on Matillion's latest AMI (Matillion v1.56.9). The instance is coming up fine and I can reach Matillion's login page at https://[internal IP], but I cannot log in with the default credentials, which are supposed to be "ec2-user" and the instance id ("i-xxxxxx"). The error message is "Invalid username or password".
The EC2 instance has no public IP, that's why I use a private IP.
I can also ssh into the instance.
Can anyone help me find out why login using the default user doesn't work?
I believe the way it's supposed to work is that at first boot the ec2-user password in /usr/share/tomcat8/conf/tomcat-users.xml gets set to the sha512sum of the instance ID. As per your comment, Tobie, that's a good spot, but I think the Matillion documentation is just out of date there, from right back when instance IDs really were only 10 characters long!
I guess it uses the instance metadata service v1 to do that, so if IMDS v1 is not available it might not get created correctly.
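If you want to test that theory, running something like this on the instance should reproduce the hash that first boot is believed to write (it assumes IMDSv1 is reachable at the standard metadata address):
# fetch the instance ID from the metadata service and hash it the same way
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
echo -n "$INSTANCE_ID" | sha512sum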
In any case, as long as you can SSH into your server and the Admin / User Configuration is in Internal mode (which is the default), you can fix the password manually like this...
Become root with sudo -i
Create the sha512sum of your chosen password like this.
echo -n "schepo" | sha512sum
Make sure you use the -n, otherwise echo appends a newline and you get the wrong hash. Mine comes out like 55aa...a1cf -
Then stop Tomcat so you can update the password
systemctl stop tomcat8
Fix the relevant line in /usr/share/tomcat8/conf/tomcat-users.xml or add a new one. You have to be really careful to keep the XML valid. Mine ends up like this:
<user username="schepo" password="55aa00778ccb153bc05aa6a8d7ee7c00f008397c5c70ebc8134aa1ba6cf682ac3d35297cbe60b21c00129039e25608056fe4922ebe1f89c7e2c68cf7fbfba1cf" roles="Emerald,API,Admin"/>
Then restart Tomcat
systemctl restart tomcat8
It normally takes about 60 seconds to restart. After that you should be able to login via the UI with your new user and/or password.

Ubuntu How to change welcome message

All,
I am using Ubuntu OS on my AWS EC2 instance. My previous developer created some custom messages that appear once we SSH into the instance (attached). I would like to change them. Googled extensively, but no luck. Can someone help? The text I want to change is "Live 1A".
Edit the file /etc/motd
motd stands for Message Of The Day
MOTD(5) - Linux Programmer's Manual
NAME
motd - message of the day
DESCRIPTION
The contents of /etc/motd are displayed by login(1) after a successful login but just before it executes the login shell.
The abbreviation "motd" stands for "message of the day", and this file has been traditionally used for exactly that (it requires
much less disk space than mail to all users).
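For example, a minimal way to replace the banner, assuming the text really does live in /etc/motd (the new message below is just a placeholder):
sudo cp /etc/motd /etc/motd.bak
echo "Your new message here" | sudo tee /etc/motd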

Gitlab (AWS) authentication using on-premise LDAP (Win 2008 R2)

I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) with our on-premise LDAP server running on Win 2008 R2. However, I am unable to do so. I am getting the following error (Could not authorize you from Ldapmain because "Invalid credentials"):
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: 'XX.YYY.Z.XX'
    port: 389
    uid: 'sAMAccountName'
    method: 'plain' # "tls" or "ssl" or "plain"
    bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    password: 'pwd1234'
    active_directory: true
    allow_username_or_email_login: true
    base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    user_filter: ''
EOS
There are two users: gitlab (a newly created AD user) and john.doe (an old AD user).
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run the gitlab-rake gitlab:ldap:check command, it displays info about that particular user only and not all users.
Earlier, gitlab-rake gitlab:ldap:check was displaying the first 100 results from AD when my credential (john.doe) was configured in the gitlab.rb file. Since this was my personal credential, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in the gitlab.rb file and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be due to some permission issue with the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, now when I run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! I think, somehow, GitLab is "forgetting" previous details.
Any help will really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using; upgrading to 8.0.5 fixed it.
Also, the values of bind_dn and base that worked for me are:
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
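If you want to sanity-check the bind credentials and search base outside of GitLab, an ldapsearch against the same server should return the user you expect. This is just a sketch reusing the host, port, bind_dn, password and base from the config above; the sAMAccountName filter value is only an example:
ldapsearch -x -H ldap://XX.YYY.Z.XX:389 \
  -D 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local' \
  -w 'pwd1234' \
  -b 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local' \
  '(sAMAccountName=john.doe)'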

error 5 when starting a service

I created a Windows service in C++ and when I try to start the service I get the message error 5: access denied.
My user account is set to admin and I even tried using the default admin account on the computer, and it still doesn't work.
I can install/uninstall the service through the cmd without problems, but I can't start the service.
The code isn't the problem here, it's the user account. Any suggestions on how to fix this?
"Running a service" is not simply "starting a program on my desktop". It does not necessarily run as "you".
The service is detached from any desktops and it actually ignores your user account. The service will have its own account/password configuration stored in the OS and when you run it, you only order it to start up. It will startup on its own user account. If you have put your .exe/.dll files in some protected folder, and if you have not configured neither the accessrights to that files nor user-pass for the service, then there's great odds that the service tries to run at default service user account like 'LocalService' or 'NetworkService' and that it simply cannot touch the files.
If you installed the service properly, go to ControlPanel - AdministrativeTools - Services, find your service and check the (if I remember well) second tab and verify that the username presented here has access to the files that are tried to be loaded and run. If the username is wrong, correct it. If you don't care about the username, then just peek that name and set accessrights on the folder and/or files such that at least both "read directry contents" and "read" and "execute" are available for that-username-the-service-tries-to-run-as.
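As a sketch (the service name and folder below are made up), you can check which account the service runs as and grant it read/execute on the install folder from an elevated prompt:
sc qc MyService
icacls "C:\Program Files\MyService" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)RX"
sc qc shows the SERVICE_START_NAME (the account the service runs as), and the (OI)(CI)RX grant gives that account read and execute permissions on the folder and everything beneath it.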

Enabling HA namenodes on a secure cluster in Cloudera Manager fails

I am running a CDH4.1.2 secure cluster and it works fine with the single namenode+secondarynamenode configuration, but when I try to enable High Availability (quorum based) from the Cloudera Manager interface it dies at step 10 of 16, "Starting the NameNode that will be transitioned to active mode namenode ([my namenode's hostname])".
Digging into the role log file gives the following fatal error:
Exception in namenode join
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: [my namenode's fqhn]:[my namenode's fqhn]:0 at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:206) at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:158) at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:147) at
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:143) at
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547) at
org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480) at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443) at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608) at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589) at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140) at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
How can I resolve this?
It looks like you have two problems:
The NameNode's IP address is resolving to "my namenode's fqhn" instead of a regular hostname. Check your /etc/hosts file to fix this.
You need to configure dfs.https.port. With the Cloudera Manager free edition, you must have had to add the appropriate configs to the safety valves to enable security; as part of that, you also need to set dfs.https.port.
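For reference, the hdfs-site.xml safety-valve entry being suggested would look something like this (50470 is the conventional default secure HTTPS port for the NameNode; this is a sketch, so adjust the value to your environment):
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>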
Given that this code path is traversed even in the non-HA mode, I'm surprised that you were able to get your secure NameNode to start up correctly before enabling HA. In case you haven't already, I recommend that you first enable security, test that all HDFS roles start up correctly and then enable HA.