I've run into a very strange issue with the AWS API, and it doesn't happen every time.
Let me explain what I'm trying to achieve.
I've created a deployment procedure with Node.js.
The deployment procedure is also used by the application itself to deploy other instances from the script. So once the application is deployed to AWS, it can deploy further instances in AWS on its own.
All communication is done through SSH, and the AWS API is used to create policies, key pairs, etc.
The SSH private key used to connect to the primary server (master) stays on my PC; let's say this key pair is "KEY-M". The private keys for the remaining client instances (which are automatically deployed by the primary) stay on the primary server; let's say this key pair is "KEY-C". So two SSH key pairs are created by the application.
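Roughly, each key pair is created through the EC2 API. Here is a simplified sketch of that part (my real code wraps this in createDeployKeyIfNotExists, so the names and region below are purely illustrative):

import * as AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'eu-west-1' }); // illustrative region

// Creates a key pair in EC2; the private key is returned exactly once.
// For KEY-C, that private key is what ends up stored on the primary server.
async function createClientKeyPair() {
  const keyPair = await ec2.createKeyPair({ KeyName: 'KEY-C' }).promise();
  return keyPair; // keyPair.KeyMaterial holds the private key (PEM)
}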
After deploying, the key pairs are correctly set and attached, as shown in the AWS instance list:
That is,
MASTER => KEY-M
CLIENT => KEY-C
But when I looked into authorized_keys by connecting through the AWS interface, both the master and the client have the same content, i.e. the public key of "KEY-M".
Below is the code for the runInstances call used for deployment:
const deploy_key = await aws.createDeployKeyIfNotExists(region);
...
...
const run_config = {
  ImageId: image.ami_id,
  InstanceType: this.options.type || 't2.micro',
  MinCount: Math.max(this.options.number || 1, 1),
  MaxCount: Math.max(this.options.number || 1, 1),
  KeyName: deploy_key.name,
  AssociatePublicIpAddress: true,
  SecurityGroups: [security_group.GroupName],
};
const instances = (await aws.ec2(region).runInstances(run_config)).Instances;
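For what it's worth, here is a minimal sketch of how I double-check which key pair EC2 believes is attached to an instance before looking at authorized_keys (plain AWS SDK v2; the instance ID is hypothetical):

import * as AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'eu-west-1' }); // illustrative region

async function checkKeyName(instanceId: string) {
  const result = await ec2.describeInstances({ InstanceIds: [instanceId] }).promise();
  const instance = result.Reservations![0].Instances![0];
  // KeyName is the key pair EC2 associated with the instance at launch time.
  console.log(`${instanceId} => ${instance.KeyName}`);
}

checkKeyName('i-0123456789abcdef0');

This matches what the instance list already shows (KEY-M for the master, KEY-C for the client); only the contents of authorized_keys differ.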
I am using AWS CDK to create a CloudFormation stack with an RDS Aurora cluster database, VPC, subnet, route table, and security group resources, and another stack with a couple of Lambdas, an API Gateway, IAM roles and policies, and many other resources.
The CDK deployment works fine and I can see both stacks created in CloudFormation with all the resources. But I had issues trying to connect to the RDS database, so I added a CfnOutput to check the connection string and realised that the RDS port was not resolved from its original number-encoded token, while the hostname is resolved properly. So I'm wondering why this is happening.
This is how I'm setting the CfnOutput:
new CfnOutput(this, "mysql-messaging-connstring", {
  value: connectionString,
  description: "Mysql connection string",
  exportName: `${prefix}-mysqlconnstring`
});
The RDS Aurora Database Cluster is created in a method called createDatabaseCluster:
const cluster = new rds.DatabaseCluster(scope, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_5_7_12 }),
  credentials: dbCredsSecret,
  instanceProps: {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
    vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
    vpc: vpc,
    publiclyAccessible: true,
    securityGroups: [ clusterSG ]
  },
  instances: 1,
  instanceIdentifierBase: dbInstanceName,
});
This createDatabaseCluster method returns the connection string:
return `server=${cluster.instanceEndpoints[0].hostname};user=${username};password=${password};port=${cluster.instanceEndpoints[0].port};database=${database};`;
In this connection string, the DB credentials are retrieved from a secret in AWS Secrets Manager and stored in username and password variables to be used in the return statement.
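For reference, this is roughly how the username and password are pulled out of the generated secret (a sketch; it assumes the cluster's secret uses the standard username/password JSON keys):

const username = cluster.secret!.secretValueFromJson('username').toString();
const password = cluster.secret!.secretValueFromJson('password').toString();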
The actual observed value of the CfnOutput is as follows:
As a workaround, I can just specify the port to use, but I want to understand why this number-encoded token is not being resolved.
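To make the workaround concrete, this is the kind of thing I mean (assuming aws-cdk-lib / CDK v2; on v1 the same Tokenization helper lives in @aws-cdk/core, and this is untested beyond my own stack):

import { Tokenization } from 'aws-cdk-lib';

// Workaround: hard-code the Aurora MySQL port instead of using the token.
const withFixedPort =
  `server=${cluster.instanceEndpoints[0].hostname};user=${username};password=${password};port=3306;database=${database};`;

// Alternatives I'm looking at: stringify the number token explicitly,
// or use socketAddress, which already renders as "hostname:port".
const portAsString = Tokenization.stringifyNumber(cluster.instanceEndpoints[0].port);
const socketAddress = cluster.instanceEndpoints[0].socketAddress;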
How can I create an Athena data source in AWS CDK that is a JDBC connection to a MySQL database using the AthenaJdbcConnector?
I believe I can use aws-sam's CfnApplication to create the AthenaJdbcConnector Lambda, but how can I connect it to Athena?
I notice a lot of Glue support in CDK which would transfer to Athena (data catalog), and there are several CfnDataSource types in other modules such as QuickSight, but I'm not seeing anything under Athena in CDK.
See the references below.
References:
https://docs.aws.amazon.com/athena/latest/ug/athena-prebuilt-data-connectors-jdbc.html
https://github.com/awslabs/aws-athena-query-federation/tree/master/athena-jdbc
https://serverlessrepo.aws.amazon.com/applications/us-east-1/292517598671/AthenaJdbcConnector
I have been playing with the same issue. Here is what I did to create the Lambda for federated queries (TypeScript):
const vpc = ec2.Vpc.fromLookup(this, 'my-project-vpc', {
  vpcId: props.vpcId
});

const cluster = new rds.ServerlessCluster(this, 'AuroraCluster', {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  defaultDatabaseName: 'MyDB',
  vpc,
  vpcSubnets: {
    onePerAz: true
  },
  scaling: { autoPause: cdk.Duration.seconds(0) } // Optional. If not set, then instance will pause after 5 minutes
});

let password = cluster.secret!.secretValueFromJson('password').toString();
let spillBucket = new Bucket(this, "AthenaFederatedSpill");
let lambdaApp = new CfnApplication(this, "MyDB", {
  location: {
    applicationId: "arn:aws:serverlessrepo:us-east-1:292517598671:applications/AthenaJdbcConnector",
    semanticVersion: "2021.42.1"
  },
  parameters: {
    DefaultConnectionString: `postgres://jdbc:postgresql://${cluster.clusterEndpoint.hostname}/MyDB?user=postgres&password=${password}`,
    LambdaFunctionName: "crossref_federation",
    SecretNamePrefix: `${cluster.secret?.secretName}`,
    SecurityGroupIds: `${cluster.connections.securityGroups.map(value => value.securityGroupId).join(",")}`,
    SpillBucket: spillBucket.bucketName,
    SubnetIds: vpc.privateSubnets[0].subnetId
  }
});
This creates the Lambda with a default connection string, just as if you had used the AWS console wizard in Athena to connect to a data source. Unfortunately, it is NOT possible to add an Athena-catalog-specific connection string via CDK. It would have to be set as an environment variable on the Lambda, and I found no way to do that: the application template simply doesn't allow it, so this is a manual post-processing step. I would sure like to hear from anybody who has a solution for that!
Also notice that I put the user/password in the JDBC URL directly. I wanted to use Secrets Manager, but because the Lambda is deployed in a VPC, it simply refuses to connect to Secrets Manager. I think this might be solvable by adding a private connection from the VPC to Secrets Manager. Again, I would like to hear from anybody who has tried that.
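In case anyone wants to experiment with that, the idea I have in mind is an interface VPC endpoint for Secrets Manager, roughly like this (untested in this setup; it assumes the standard CDK ec2 module already imported above):

// Gives resources inside the VPC (like the connector Lambda) a private path
// to Secrets Manager, so they don't need to reach the public endpoint.
vpc.addInterfaceEndpoint('SecretsManagerEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});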
I've set up an Aurora PostgreSQL-compatible database. I can connect to the database via the public address, but I'm not able to connect via a Lambda function that is placed in the same VPC.
This is just a test environment, and the security settings are weak. In the network settings I tried "no VPC" and I tried my default VPC, where the database and the Lambda are placed, but this doesn't make a difference.
This is my Node.js code to run a simple SELECT statement:
const AWS = require('aws-sdk');

var params = {
  awsSecretStoreArn: '{mySecurityARN}',
  dbClusterOrInstanceArn: 'myDB-ARN',
  sqlStatements: 'SELECT * FROM userrole',
  database: 'test',
  schema: 'user'
};

const aurora = new AWS.RDSDataService();
let userrightData = await aurora.executeSql(params).promise();
When I start my test in the AWS console I get the following errors:
"errorType": "UnknownEndpoint",
"errorMessage": "Inaccessible host: `rds-data.eu-central- 1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
"trace": [
"UnknownEndpoint: Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
I've already checked Amazon's tutorial, but I can't find a step I didn't try.
The error message "This service may not be available in the `eu-central-1' region." is absolutely right, because an Aurora Serverless database is not available in eu-central-1.
I configured an Aurora PostgreSQL database, not an Aurora Serverless DB.
AWS.RDSDataService(), which is what I wanted to use to connect to the database, only works with an Aurora Serverless instance. It's described here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDSDataService.html.
This is the reason why this error message appeared.
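Since the Data API is therefore not an option for a provisioned Aurora PostgreSQL cluster, I assume the way to go is a regular database driver from the Lambda inside the VPC. A minimal sketch with the pg package (the environment variables are hypothetical and would have to be set on the function):

import { Client } from 'pg';

export const handler = async () => {
  const client = new Client({
    host: process.env.DB_HOST,       // cluster endpoint
    port: 5432,
    database: 'test',
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
  });

  await client.connect();
  const result = await client.query('SELECT * FROM userrole');
  await client.end();
  return result.rows;
};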
I used Ansible to create a GCE cluster following the guide at: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
At the end of the GCE creation, I used the add_host Ansible module to register all instances in their corresponding groups, e.g. gce_master_ip.
But when I then try to run the following play after the creation task, it does not work:
- name: Create redis on the master
  hosts: gce_master_ip
  connection: ssh
  become: True
  gather_facts: True
  vars_files:
    - gcp_vars/secrets/auth.yml
    - gcp_vars/machines.yml
  roles:
    - { role: redis, tags: ["redis"] }
Within the auth.yml file I already provided the service account email, the path to the JSON credentials file, and the project id. But apparently that's not enough. I got errors like the one below:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
This is a typical "SSH username and credentials not permitted or not provided" error. In this case I would say I did not set up any username or private key for the SSH connection that Ansible will use to connect.
Is there anything I should do to make sure the corresponding credentials are provided to establish the connection?
During my search I think one question briefly mentioned that you could use the gcloud compute ssh... command. But is there a way to tell Ansible not to use classic SSH and to use the gcloud one instead?
To have Ansible SSH into a GCE instance, you'll have to supply an SSH username and private key which correspond to the SSH configuration available on the instance.
So the question is: If you've just used the gcp_compute_instance Ansible module to create a fresh GCE instance, is there a convenient way to configure SSH on the instance without having to manually connect to the instance and do it yourself?
For this purpose, GCP provides a couple of ways to automate and manage key distribution for GCE instances.
For example, you could use the OS Login feature. To use OS Login with Ansible:
When creating the instance using Ansible, enable OS Login on the target instance by setting the "enable-oslogin" metadata field to "TRUE" via the metadata parameter.
Make sure the service account attached to the instance that runs Ansible has both the roles/iam.serviceAccountUser and roles/compute.osLoginAdmin permissions.
Either generate a new or choose an existing SSH keypair that will be deployed to the target instance.
Upload the public key for use with OS Login: This can be done via gcloud compute os-login ssh-keys add --key-file [KEY_FILE_PATH] --ttl [EXPIRE_TIME] (where --ttl specifies how long you want this public key to be usable - for example, --ttl 1d will make it expire after 1 day)
Configure Ansible to use the Service Account's user name and the private key which corresponds to the public key uploaded via the gcloud command. For example by overriding the ansible_user and ansible_ssh_private_key_file inventory parameters, or by passing --private-key and --user parameters to ansible-playbook.
The service account username is the username value returned by the gcloud command above.
Also, if you want to automatically set the enable-oslogin metadata field to "TRUE" across all instances in your GCP project, you can simply add a project-wide metadata entry. This can be done in the Cloud Console under "Compute Engine > Metadata".
I am trying to upload a custom public key to my Amazon AWS account because I would like to use my own custom-generated key pair for communication with AWS. I am trying to perform this upload using Ansible's ec2_key module.
Here is what I have done so far:
STEP 1. Sign up for an Amazon AWS account here. I entered an "AWS Account Name" and password.
STEP 2. Install the AWS CLI and boto Python packages:
$ pip install awscli boto
STEP 3. Generate an SSH key pair (I used Ansible for this as well):
- name: Generate a 2048-bit SSH key for user
  user:
    name: "{{ ansible_ssh_user }}"
    generate_ssh_key: yes
    ssh_key_bits: 2048
    ssh_key_file: ~/.ssh/id_rsa
STEP 4. I copied the contents of the public key (~/.ssh/id_rsa.pub) into /home/username/.aws/credentials.
STEP 5. Use an Ansible task to upload the public key to Amazon AWS:
vars:
  aws_access_key_id: my_key_name
  aws_region: "us-west-2"
  aws_secret_access_key: "ssh-rsa Y...r"

tasks:
  - name: example3 ec2 key
    ec2_key:
      name: "{{ aws_access_key_id }}"
      region: "{{ aws_region }}"
      key_material: "{{ aws_secret_access_key }}"
      state: present
      force: True
      validate_certs: yes
The output of STEP 5 is:
An exception occurred during task execution. ..."module_stderr": "Traceback (most
recent call last):\n File \"/tmp/ansible_WqbqHU/ansible_module_ec2_key.py\",...
raise self.ResponseError(response.status, response.reason,
body)\nboto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized\n<?xml
version=\"1.0\" encoding=\"UTF-8\"?>
\n<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to
validate the provided access credentials</Message></Error></Errors>...
Here is my /home/username/.aws/credentials (I just made up some key_id):
[default]
aws_access_key_id = my_key_name
aws_secret_access_key = ssh-rsa Y...r
Here is my /home/username/.aws/config:
[default]
output = json
region = us-west-2
Both of these files seem to agree with the AWS doc requirements here.
Additional Info:
Host system: Ubuntu 17.10 (non-root user)
The 2 Ansible tasks are run from separate Ansible playbooks - first the sshkeygen playbook is run and then the ec2_key playbook is run. Ansible playbooks are not run using become.
Ansible version = ansible==2.4.1.0
Boto version = boto==2.48.0, botocore==1.7.47
Questions
How can I instruct the AWS CLI to communicate with my online account (STEP 1)? It seems like I am missing this step somewhere in the Ansible task using the ec2_key module.
Currently, I have the SAME public key in (a) the 2nd Ansible task to upload the public key and (b) /home/username/.aws/credentials. Is this Ansible task missing something/incorrect? Should there be a 2nd public key?
You've put the SSH public key into the secret_access_key field.
It looks like this for me (letters mixed and replaced here of course, not my real key):
[Credentials]
aws_access_key_id = FMZAIQGTCHSLGSNDIPXT
aws_secret_access_key = gcmbyMs1Osj3ISCSasFtEx7sVCr92S3Mvjxlcwav
If you go to IAM (https://console.aws.amazon.com/iam), you can regenerate your keys.
You'll need to go to IAM->Users, click your username, click the Security Credentials, and "Create Access Key".
If you've just set up your account, it's likely that you don't have IAM users, only the so-called root account user (the one you signed up with). In this case, click your name at the top of the main screen, and select My Security Credentials. You might get a warning, but no worries in your case, you're not running a large organization. Click the Access Keys dropdown and click Create New Access Key (you might have none). This will give you the keys you need. Save them somewhere, because when you leave the screen, you'll no longer get the chance to see the secret access key, only the key ID.
However, if you are using a machine with a role attached, you don't need credentials at all; boto should pick them up.
Your secret access key looks wrong in your credentials. That should be associated with an IAM user or left blank if you’re running from an EC2 instance with an IAM role attached; not the key you’re trying to upload.
Can you SSH to the host already? I think the unauthorised error is coming from your attempted access to the host.
Something like:
ssh -i my_key.pem ec2-user@165.0.0.105
You can see your public IP address in the AWS console. If you face errors there, double-check that the instance is accepting SSH traffic in its security group.