Where is Fabric getting the default user from?

The Fabric documentation states:
Fabric defaults to your local username when making SSH connections
My standard username is dserodio, so there are references to this username in quite a few of my dotfiles, but on this network my username is dserodi, and Fabric is not getting the default username right:
>>> import os
>>> print os.environ['USER']
'dserodi'
>>> from fabric.operations import *
>>> print env['user']
'dserodio'
>>> print env['local_user']
'dserodio'
Where's Fabric getting this default username from?

Fabric can get a username from a few places:
By default, Fabric uses an operating-system-independent method of working out the user you are logged in as.
But it can be overridden for all hosts using env.user (or via the -u or --user command-line option).
It can be overridden for one (or more) hosts in env.hosts (or via the -H or --hosts command-line option).
It can be overridden temporarily for a block using a settings context manager.
It can be overridden temporarily for a single task using a call to execute.
If you have env.use_ssh_config = True set, then your ~/.ssh/config may introduce an alias that includes a username.
But my guess is that you've got a ~/.fabricrc, or some other file specified with -c or --config, that has an additional username in it.
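As a quick way to see what that OS-independent default actually resolves to: Fabric 1.x derives env.local_user from something very close to Python's getpass.getuser() (an assumption about the internals; check fabric/state.py for your version). This shows why the result can differ from os.environ['USER']:

```python
import getpass
import os

# getpass.getuser() checks the LOGNAME, USER, LNAME and USERNAME
# environment variables in order, and on POSIX falls back to the
# password database (pwd) -- so it can disagree with os.environ['USER']
# when those variables are unset or stale.
print(os.environ.get("USER"))
print(getpass.getuser())
```

Comparing the two outputs on the affected machine should show where the unexpected name comes from.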

Related

Matillion: Cannot login with default ec2-user

I've launched a new AWS EC2 instance (m5.large) based on Matillion's latest AMI (Matillion v1.56.9). The instance is coming up fine and I can reach Matillion's login page at https://[internal IP], but I cannot login with the default credentials which are supposed to be "ec2-user" and the instance id ("i-xxxxxx"). Error message is "Invalid username or password".
The EC2 instance has no public IP, that's why I use a private IP.
I can also ssh into the instance.
Can anyone help me find out why login using the default user doesn't work?
I believe the way it's supposed to work is that at first boot the ec2-user password in /usr/share/tomcat8/conf/tomcat-users.xml gets set to the sha512sum of the instance ID. As per your comment, Tobie, that's a good spot, but I think the Matillion documentation is just out of date there, from right back when instance IDs really were only 10 characters long!
I guess it uses the instance metadata service v1 to do that, so if IMDS v1 is not available it might not get created correctly.
In any case, as long as you can SSH into your server and the Admin / User Configuration is in Internal mode (which is the default), you can fix the password manually like this:
Become root with sudo -i
Create the sha512sum of your chosen password like this:
echo -n "schepo" | sha512sum
Make sure you use the -n, otherwise echo adds a newline and you get the wrong hash. Mine comes out like 55aa...a1cf.
Then stop Tomcat so you can update the password:
systemctl stop tomcat8
Fix the relevant line in /usr/share/tomcat8/conf/tomcat-users.xml or add a new one. You have to be really careful to keep the XML valid. Mine ends up like this:
<user username="schepo" password="55aa00778ccb153bc05aa6a8d7ee7c00f008397c5c70ebc8134aa1ba6cf682ac3d35297cbe60b21c00129039e25608056fe4922ebe1f89c7e2c68cf7fbfba1cf" roles="Emerald,API,Admin"/>
Then restart Tomcat:
systemctl restart tomcat8
It normally takes about 60 seconds to restart. After that you should be able to login via the UI with your new user and/or password.
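The hashing step above can be sketched as a small shell fragment (the service-restart lines are left commented out; that the first-boot password really is the sha512 of the instance ID is the assumption discussed above):

```shell
# Sketch of the manual password reset described above. Run as root.
NEW_PASSWORD="schepo"   # choose your own password
# printf '%s' avoids the trailing-newline pitfall, same as echo -n
HASH=$(printf '%s' "$NEW_PASSWORD" | sha512sum | awk '{print $1}')
echo "password hash: $HASH"
# systemctl stop tomcat8
# ...edit /usr/share/tomcat8/conf/tomcat-users.xml, set password="$HASH"...
# systemctl restart tomcat8
```

A sha512 digest is always 128 hex characters, which is a quick sanity check before pasting it into tomcat-users.xml.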

Executing scripts on remote server as user

I have a Django service that runs under a privileged (sudo) account. All operations like job submission, database operations, etc. are performed by this account. However, certain scripts should be executed as the end user, for traceability later on. What is the best way to transfer the user's password so that it can be taken as input to a script that does su myid, with the subsequent commands in the script running under that user ID?
For example, my server runs as sys_admin. The password mypassword should be given by the user on the webpage and passed to the script as an argument.
Below is my sample script:
su - myid <<! >/dev/null 2>&1
mypassword
whoami > /dev/tty
!
The above script will print myid and not sys_admin. What is the best and most secure way to perform this operation so that the password is not exposed to the backend either?
Can some form of encryption and decryption be performed on client or server side with some passphrase?
It sounds like you are directly passing user input through a shell running as a privileged user. This is VERY unsafe, as it can lead to OS command injection.
I'm sorry I couldn't directly assist you, but if I've understood correctly, it isn't a great idea to continue the way it's currently being done.
Maybe someone can offer a safer way of handling it? I'm not experienced in Django.
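One commonly recommended pattern, sketched below (not Django-specific; the function names and the sudoers path are illustrative assumptions): never build a shell string from user input, and avoid handling the password at all by granting the service account a narrow NOPASSWD sudo rule:

```python
import subprocess

def build_command(target_user, script_path):
    # An argv list is passed to the OS as-is: no shell is involved,
    # so quoting tricks in the arguments cannot inject extra commands.
    return ["sudo", "-u", target_user, "--", script_path]

def run_as_user(target_user, script_path):
    # Relies on a sudoers entry such as:
    #   sys_admin ALL=(ALL) NOPASSWD: /opt/traceable-scripts/*
    # so the web app never sees or transmits the user's password.
    return subprocess.run(build_command(target_user, script_path),
                          capture_output=True, text=True)
```

The traceability requirement is then met by sudo's own logging (it records the invoking account and the target user), rather than by shipping passwords through the web tier.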

What's the most secure way I can use python-ldap in my script to connect to my LDAP server?

I have a script that is using the python-ldap module.
Here is my basic code that makes a connection to my ldap server:
server = 'ldap://example.com'
dn = 'uid=user1,cn=users,cn=accounts,dc=example,dc=com'
pw = "password!"
con = ldap.initialize(server)
con.start_tls_s()
con.simple_bind_s(dn,pw)
This works... but does the actual, literal password have to be stored in the variable pw? It seems like a bad idea to have a password sitting right there in a script.
Is there a way to make a secure connection to my LDAP server without storing my actual password in the script?
Placing the password in a separate file with restricted permissions is pretty much it. You can for example source that file from the main script:
. /usr/local/etc/secret-password-here
You could also restrict the permissions of the main script so that only authorized persons can execute it, but it's probably better to do as you suggest and store only the password itself in a restricted file. That way you can allow inspection of the code itself (without sensitive secrets), version-control and copy around the script more easily, etc...
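The same pattern in Python, rather than sourcing the file from a shell (the path is the example from above; the permission check is an extra safeguard I'm adding, not something python-ldap requires):

```python
import os
import stat

def read_password(path):
    # Refuse to use a secret file that group/others can access,
    # mirroring the restricted-permissions advice above.
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError("%s should be chmod 600" % path)
    with open(path) as f:
        return f.read().strip()

# pw = read_password("/usr/local/etc/secret-password-here")
# con.simple_bind_s(dn, pw)
```

This keeps the script itself free of secrets, so it can be version-controlled and inspected while only the password file stays locked down.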

How to access application default credentials from a GCE container without any Google API binding?

I am writing an application that runs in a container, on Google Container Engine, in a language that doesn't have any binding to the Google API.
I need to access the application default credentials. Unfortunately the official documentation doesn't explain how to do such a thing in production environment without using one of the existing bindings to the Google API.
In development environment (i.e. on my local dev machine) I export the GOOGLE_APPLICATION_CREDENTIALS variable, but it isn't available in the production container. Does it mean I have to use some endpoint from the REST API?
Ruby's implementation (the googleauth gem) is open source, and its lookup logic is easy to follow.
Different locations to check by priority
The get_application_default method clearly shows that:
the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked,
then the well-known gcloud path is checked (the file written by gcloud auth application-default login),
then the system default path is checked (under /etc/google/auth on Linux),
finally, if still nothing and we are running on a Compute Engine instance, a new access token is fetched from the metadata server.
def get_application_default(scope = nil, options = {})
  creds = DefaultCredentials.from_env(scope) ||
          DefaultCredentials.from_well_known_path(scope) ||
          DefaultCredentials.from_system_default_path(scope)
  return creds unless creds.nil?
  raise NOT_FOUND_ERROR unless GCECredentials.on_gce?(options)
  GCECredentials.new
end
This is consistent with what the official documentation says:
The environment variable GOOGLE_APPLICATION_CREDENTIALS is checked. If
this variable is specified it should point to a file that defines the
credentials. [...]
If you have installed the Google Cloud SDK on your machine and have run
the command gcloud auth application-default login, your identity can
be used as a proxy to test code calling APIs from that machine.
If you
are running in Google App Engine production, the built-in service
account associated with the application will be used.
If you are running in Google Compute Engine production, the built-in service
account associated with the virtual machine instance will be used.
If none of these conditions is true, an error will occur.
Detecting GCE environment
The on_gce? method shows how to check whether we are on GCE by sending a GET/HEAD HTTP request to http://169.254.169.254. If there is a Metadata-Flavor: Google header in the response, then it's probably GCE.
def on_gce?(options = {})
  c = options[:connection] || Faraday.default_connection
  resp = c.get(COMPUTE_CHECK_URI) do |req|
    # Comment from: oauth2client/client.py
    #
    # Note: the explicit `timeout` below is a workaround. The underlying
    # issue is that resolving an unknown host on some networks will take
    # 20-30 seconds; making this timeout short fixes the issue, but
    # could lead to false negatives in the event that we are on GCE, but
    # the metadata resolution was particularly slow. The latter case is
    # "unlikely".
    req.options.timeout = 0.1
  end
  return false unless resp.status == 200
  return false unless resp.headers.key?('Metadata-Flavor')
  return resp.headers['Metadata-Flavor'] == 'Google'
rescue Faraday::TimeoutError, Faraday::ConnectionFailed
  return false
end
Fetching an access token directly from Google
If the default credentials could not be found on the filesystem and the application is running on GCE, we can ask a new access token without any prior authentication. This is possible because of the default service account, that is created automatically when GCE is enabled in a project.
The fetch_access_token method shows how, from a GCE instance, we can get a new access token by simply issuing a GET request to http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token.
def fetch_access_token(options = {})
  c = options[:connection] || Faraday.default_connection
  c.headers = { 'Metadata-Flavor' => 'Google' }
  resp = c.get(COMPUTE_AUTH_TOKEN_URI)
  case resp.status
  when 200
    Signet::OAuth2.parse_credentials(resp.body,
                                     resp.headers['content-type'])
  when 404
    raise(Signet::AuthorizationError, NO_METADATA_SERVER_ERROR)
  else
    msg = "Unexpected error code #{resp.status}" + UNEXPECTED_ERROR_SUFFIX
    raise(Signet::AuthorizationError, msg)
  end
end
Here is a curl command to illustrate:
curl \
  http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token \
  -H 'accept: application/json' \
  -H 'Metadata-Flavor: Google'
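For completeness, here is the same call with no Google binding at all, using only Python's standard library (a sketch; the URL and the mandatory header are the ones from the answer above, and it only works from inside a GCE/GKE instance):

```python
import json
import urllib.request

TOKEN_URL = ("http://169.254.169.254/computeMetadata/v1/"
             "instance/service-accounts/default/token")

def fetch_default_token(timeout=1.0):
    # The Metadata-Flavor header is required; without it the
    # metadata server rejects the request.
    req = urllib.request.Request(TOKEN_URL,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # Response body: {"access_token": ..., "expires_in": ..., "token_type": ...}
        return json.load(resp)
```

The short timeout mirrors the Ruby client's approach: off GCE, the address is unroutable and the call should fail fast rather than hang.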

Gitlab (AWS) authentication using on-premise LDAP (Win 2008 R2)

I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) with our on-premise LDAP server running on Windows 2008 R2. However, I am unable to do so. I am getting the following error: Could not authorize you from Ldapmain because "Invalid credentials".
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
label: 'LDAP'
host: 'XX.YYY.Z.XX'
port: 389
uid: 'sAMAccountName'
method: 'plain' # "tls" or "ssl" or "plain"
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
password: 'pwd1234'
active_directory: true
allow_username_or_email_login: true
base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
user_filter: ''
EOS
There are two users: gitlab (a newly created AD user) and john.doe (an old AD user).
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run the gitlab-rake gitlab:ldap:check command, it displays info about that particular user only and not all users.
Earlier, gitlab-rake gitlab:ldap:check displayed the first 100 results from AD when my credentials (john.doe) were configured in gitlab.rb. Since these were my personal credentials, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in gitlab.rb and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be due to a permission issue with the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, now when I run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! I think GitLab is somehow "forgetting" previous details.
Any help will really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using; upgrading to 8.0.5 fixed it.
Also, the values of bind_dn and base that worked for me are:
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'