After creating a Java 8 Elastic Beanstalk instance with RDS, the RDS connection details are not visible as environment variables (they are visible on other instances that are running).
After running the printenv command, I expected these values to be available, but they are not:
RDS_HOSTNAME=foo.com
RDS_USERNAME=foo
RDS_PASSWORD=bar
These are required by the server config:
database:
  driverClass: com.mysql.jdbc.Driver
  user: ${RDS_USERNAME}
  password: ${RDS_PASSWORD}
  url: jdbc:mysql://${RDS_HOSTNAME}/${RDS_DB_NAME}
When the application starts they are not available; the logs show a Java exception saying it cannot find the environment variables:
io.dropwizard.configuration.UndefinedEnvironmentVariableException: The environment variable 'RDS_USERNAME' is not defined; could not substitute the expression '${RDS_USERNAME}'.
at io.dropwizard.configuration.EnvironmentVariableLookup.lookup(EnvironmentVariableLookup.java:41)
at org.apache.commons.lang3.text.StrSubstitutor.resolveVariable(StrSubstitutor.java:934)
at org.apache.commons.lang3.text.StrSubstitutor.substitute(StrSubstitutor.java:855)
at org.apache.commons.lang3.text.StrSubstitutor.substitute(StrSubstitutor.java:743)
at org.apache.commons.lang3.text.StrSubstitutor.replace(StrSubstitutor.java:403)
at io.dropwizard.configuration.SubstitutingSourceProvider.open(SubstitutingSourceProvider.java:39)
at io.dropwizard.configuration.BaseConfigurationFactory.build(BaseConfigurationFactory.java:83)
at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:124)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:72)
at io.dropwizard.cli.Cli.run(Cli.java:75)
at io.dropwizard.Application.run(Application.java:93)
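For context, this substitution is wired up in the Dropwizard Application more or less like this (a minimal sketch; FooApplication and FooConfiguration are placeholder names, and the strict flag is what makes a missing variable throw the exception above):

import io.dropwizard.Application;
import io.dropwizard.configuration.EnvironmentVariableSubstitutor;
import io.dropwizard.configuration.SubstitutingSourceProvider;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class FooApplication extends Application<FooConfiguration> {
    @Override
    public void initialize(Bootstrap<FooConfiguration> bootstrap) {
        // Enable ${VARIABLE} substitution in the YAML config.
        // strict = true fails fast with UndefinedEnvironmentVariableException
        // when a variable is missing, which is the exception shown above.
        bootstrap.setConfigurationSourceProvider(
                new SubstitutingSourceProvider(
                        bootstrap.getConfigurationSourceProvider(),
                        new EnvironmentVariableSubstitutor(true)));
    }

    @Override
    public void run(FooConfiguration configuration, Environment environment) {
        // resources, health checks, etc.
    }
}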
However, if I run the following command on the EC2 instance
sudo /opt/elasticbeanstalk/bin/get-config environment
it prints the values out as JSON:
{"CONFIG":"dev.yml","RDS_HOSTNAME":"foo.com","RDS_PASSWORD":"foo","M2":"/usr/local/apache-maven/bin","M2_HOME":"/usr/local/apache-maven","RDS_DB_NAME":"foo","JAVA_HOME":"/usr/lib/jvm/java","RDS_USERNAME":"foo","GRADLE_HOME":"/usr/local/gradle","RDS_PORT":"3306"}
Any ideas how to make these values visible to the ec2-user?
I have tried:
Restarting the EB instance
Rebuilding the instance
cat-ing the values into a script that sets them after eb deploy
Any ideas why they are not visible on this particular instance?
Instance details
Environment details foo: foo-service
Application name: foo-service
Region: eu-west-2
Platform: arn:aws:elasticbeanstalk:eu-west-2::platform/Java 8 running on 64bit Amazon Linux/2.6.0
Tier: WebServer-Standard
Check this out: AWS won't expose the environment variables directly in the OS shell, as you might expect:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I would run an EB environment update and/or replace the instance. Or you can move to "Storing the Connection String in Amazon S3".
When the environment update is complete, the DB instance's hostname
and other connection information are available to your application
through the following environment properties:
RDS_HOSTNAME – The hostname of the DB instance (Amazon RDS console label: Endpoint).
RDS_PORT – The port on which the DB instance accepts connections; the default value varies between DB engines (Amazon RDS console label: Port).
RDS_DB_NAME – The database name, ebdb (Amazon RDS console label: DB Name).
RDS_USERNAME – The user name that you configured for your database (Amazon RDS console label: Username).
RDS_PASSWORD – The password that you configured for your database.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// 'logger' is whatever logging facade the application already uses.
private static Connection getRemoteConnection() {
    if (System.getenv("RDS_HOSTNAME") != null) {
        try {
            Class.forName("org.postgresql.Driver");
            String dbName = System.getenv("RDS_DB_NAME");
            String userName = System.getenv("RDS_USERNAME");
            String password = System.getenv("RDS_PASSWORD");
            String hostname = System.getenv("RDS_HOSTNAME");
            String port = System.getenv("RDS_PORT");
            String jdbcUrl = "jdbc:postgresql://" + hostname + ":" + port + "/" + dbName
                    + "?user=" + userName + "&password=" + password;
            logger.trace("Getting remote connection with connection string from environment variables.");
            Connection con = DriverManager.getConnection(jdbcUrl);
            logger.info("Remote connection successful.");
            return con;
        }
        catch (ClassNotFoundException e) { logger.warn(e.toString()); }
        catch (SQLException e) { logger.warn(e.toString()); }
    }
    return null;
}
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-rds.html#java-rds-javase
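If you go the "Storing the Connection String in Amazon S3" route instead, the idea is roughly the following (a minimal sketch; the bucket name, key and property names are hypothetical, and the instance profile must be allowed to read the object):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DatabaseSettings {
    // Reads the connection settings from a properties file kept in S3,
    // so the application does not depend on shell environment variables.
    public static Properties load() throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // resolves credentials from the instance profile
        try (InputStream in = s3.getObject("my-config-bucket", "beanstalk-database.properties")
                                .getObjectContent()) {
            Properties props = new Properties();
            props.load(in);
            return props; // e.g. props.getProperty("RDS_HOSTNAME")
        }
    }
}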
Can you try initializing them in /etc/environment? When I need to be sure an environment variable exists, I add it there; all user shells will get that variable set that way.
Related
TL;DR: I am spawning an EC2 instance using an autoscale group, and I can connect to it. But I cannot successfully log in to that instance using the SSH key pair I specified in the autoscale group.
I have used Terraform to create an autoscale group to launch an EC2 instance. Here is the autoscale group:
module "ssh_key_pair" {
source = "cloudposse/key-pair/aws"
version = "0.18.3"
name = "myproj-ec2"
ssh_public_key_path = "."
generate_ssh_key = true
}
module "autoscale_group" {
source = "cloudposse/ec2-autoscale-group/aws"
version = "0.30.0"
name = "myproj"
image_id = data.aws_ami.amazon_linux_2.id
instance_type = "t2.small"
security_group_ids = [module.sg.id]
subnet_ids = module.subnets.public_subnet_ids
health_check_type = "EC2"
min_size = 1
desired_capacity = 1
max_size = 1
wait_for_capacity_timeout = "5m"
associate_public_ip_address = true
user_data_base64 = base64encode(templatefile("${path.module}/user_data.tpl", { cluster_name = aws_ecs_cluster.default.name }))
key_name = module.ssh_key_pair.key_name
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = true
cpu_utilization_high_threshold_percent = "70"
cpu_utilization_low_threshold_percent = "20"
}
And the user_data.tpl file looks like this:
#!/bin/bash
echo ECS_CLUSTER=${cluster_name} >> /etc/ecs/ecs.config
# Set up crontab file
echo "MAILTO=webmaster#myproj.com" >> /var/spool/cron/ec2-user
echo " " >> /var/spool/cron/ec2-user
echo "# Clean docker files once a week" >> /var/spool/cron/ec2-user
echo "0 0 * * 0 /usr/bin/docker system prune -f" >> /var/spool/cron/ec2-user
echo " " >> /var/spool/cron/ec2-user
start ecs
The instance is spawned, and when I SSH into the spawned instance using the DNS name for the first time, I can successfully connect. (The SSH server returns a host key on first connect, the same one listed in the instance's console output. After approving it, the host key is added to ~/.ssh/known_hosts.)
However, despite having created an ssh_key_pair and specifying the key pair's key_name when creating the autoscale group, I am not able to successfully log in to the spawned instance. (I've checked, and the key pair exists in the AWS console using the expected name.) When I use SSH on the command line, specifying the private key half of the key pair created, the handshake above succeeds, but then the connection ultimately fails with:
debug1: No more authentication methods to try.
ec2-user@myhost.us-east-2.compute.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
When I use the Connect button in the AWS Console and click the "SSH client" tab, it says:
No associated key pair
This instance is not associated with a key pair. Without a key pair, you can't connect to the instance through SSH.
You can connect using EC2 Instance Connect with just a valid username. You can connect using Session Manager if you have been granted the necessary permissions.
I also can't use EC2 Instance Connect, which fails with:
There was a problem connecting to your instance
Log in failed. If this instance has just started up, wait a few minutes and try again. Otherwise, ensure the instance is running on an AMI that supports EC2 Instance Connect.
I'm using the most_recent AMI with regex amzn2-ami-ecs-hvm.*x86_64-ebs, which as I understand it comes pre-installed with EC2 Instance Connect.
Am I missing a step in the user_data template? I also read something somewhere about the instance's roles possibly affecting this, but I can't figure out how to configure that with an automatically generated instance like this.
What you've posted now, and in your previous questions, is correct. There is no reason why you wouldn't be able to SSH into the instance.
You must make sure that you are using the myproj-ec2 private SSH key in your ssh command, for example:
ssh -i ./myproj-ec2 ec2-user@<instance-public-ip-address>
Also, ec2-instance-connect is not installed on ECS-optimized instances. You would have to install it manually if you want to use it.
P.S. I'm not checking your user_data or any IAM roles, as they are not related to your SSH issue. If you have problems with those, a new question should be asked.
I created an RDS database via the AWS console. The database is built with Aurora MySQL.
The problem is that it does not create a database; it creates a cluster with at least two instances:
spring:
  datasource:
    reader:
      username: admin
      password: mypassword
      driver-class-name: com.mysql.cj.jdbc.Driver
      jdbcUrl: jdbc:mysql://cr-management-database-instance-1-us-west-2a.XXXXXXXXX.us-west-2.rds.amazonaws.com:3306/<Database name>
      pattern: get*,find*
    writer:
      username: admin
      password: mypassword
      driver-class-name: com.mysql.cj.jdbc.Driver
      jdbcUrl: jdbc:mysql://cr-management-database-instance-1.XXXXXXXXXX.us-west-2.rds.amazonaws.com:3306/<Database name>
      pattern: add*,update*
So I have a cluster cr-management-database
and two instances:
cr-management-database-instance-1 and
cr-management-database-instance-1-us-west-2a
But the console does not create a database name, so I can't complete the URL.
I am used to working with ordinary databases. Does Aurora MySQL RDS have a database name, and how do you create one via the console?
I've set up an Aurora PostgreSQL compatible database. I can connect to the database via the public address but I'm not able to connect via a Lambda function which is placed in the same VPC.
This is just a test environment and the security settings are weak. In the network settings I tried to use "no VPC" and I tried my default VPC where the database and the lambda are placed. But this doesn't make a difference.
This is my Node.js code to run a simple SELECT statement:
var params = {
awsSecretStoreArn: '{mySecurityARN}',
dbClusterOrInstanceArn: 'myDB-ARN',
sqlStatements: 'SELECT * FROM userrole',
database: 'test',
schema: 'user'
};
const aurora = new AWS.RDSDataService();
let userrightData = await aurora.executeSql(params).promise();
When I run my test in the AWS console I get the following error:
"errorType": "UnknownEndpoint",
"errorMessage": "Inaccessible host: `rds-data.eu-central- 1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
"trace": [
"UnknownEndpoint: Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
I've already worked through Amazon's tutorial, but I can't find a step I didn't try.
The error message "This service may not be available in the `eu-central-1' region." is accurate, because an Aurora Serverless database is not available in eu-central-1.
I had configured an Aurora PostgreSQL cluster, not an Aurora Serverless one.
AWS.RDSDataService(), which I wanted to use to connect to the database, only works with an Aurora Serverless instance. It's described here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDSDataService.html.
That is why this error message appeared.
Given an instance id, I want to get an EC2 instance info (for example, its running status, private IP, public IP).
I have done some research (e.g. looking at the sample code posted in Managing Amazon EC2 Instances),
but there is only sample code of getting the Amazon EC2 instances for your account and region.
I tried to modify the sample and here is what I came up with:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesResult;
import com.amazonaws.services.ec2.model.DryRunResult;
import com.amazonaws.services.ec2.model.DryRunSupportedRequest;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

private static AmazonEC2 getEc2StandardClient() {
    // Using StaticCredentialsProvider
    final String accessKey = "access_key";
    final String secretKey = "secret_key";
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
    return AmazonEC2ClientBuilder.standard()
            .withRegion(Regions.AP_NORTHEAST_1)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();
}

public static void getInstanceInfo(String instanceId) {
    final AmazonEC2 ec2 = getEc2StandardClient();
    // Dry run first to check that we are allowed to call DescribeInstances
    DryRunSupportedRequest<DescribeInstancesRequest> dryRequest =
        () -> {
            DescribeInstancesRequest request = new DescribeInstancesRequest()
                    .withInstanceIds(instanceId);
            return request.getDryRunRequest();
        };
    DryRunResult<DescribeInstancesRequest> dryResponse = ec2.dryRun(dryRequest);
    if (!dryResponse.isSuccessful()) {
        System.out.println("Failed to get information of instance " + instanceId);
        return;
    }
    DescribeInstancesRequest request = new DescribeInstancesRequest()
            .withInstanceIds(instanceId);
    DescribeInstancesResult response = ec2.describeInstances(request);
    Reservation reservation = response.getReservations().get(0);
    Instance instance = reservation.getInstances().get(0);
    System.out.println("Instance id: " + instance.getInstanceId() + ", state: " + instance.getState().getName() +
            ", public ip: " + instance.getPublicIpAddress() + ", private ip: " + instance.getPrivateIpAddress());
}
It is working fine but I wonder if it's the best practice to get info from a single instance.
but there is only sample code of getting the Amazon EC2 instances for your account and region.
Yes, you can only get information for instances that you have permission to read.
It is working fine but I wonder if it's the best practice to get info from a single instance
You have multiple options.
For getting EC2 instance details from any client (e.g. from your on-premises network), your code looks fine.
If you are running the code inside the AWS environment (on EC2, Lambda, Docker, ...), you can instead attach a service role that is allowed to call the DescribeInstances operation. Then you don't need to specify the AWS credentials explicitly (the DefaultAWSCredentialsProviderChain will pick them up).
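For example, with such a role attached the client can be built without explicit credentials; the SDK then resolves them through the DefaultAWSCredentialsProviderChain (a minimal sketch):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

private static AmazonEC2 getEc2ClientWithDefaultCredentials() {
    // No credentials provider is configured here, so the SDK falls back to
    // the DefaultAWSCredentialsProviderChain (environment variables, profile
    // file, or the role attached to the EC2 instance / Lambda / container).
    return AmazonEC2ClientBuilder.standard()
            .withRegion(Regions.AP_NORTHEAST_1)
            .build();
}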
If you are getting the EC2 metadata from the instance itself, you can use the EC2 metadata service.
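For that last option, the Java SDK ships a small helper for the instance metadata service (a minimal sketch; this only works when the code runs on the EC2 instance itself):

import com.amazonaws.util.EC2MetadataUtils;

// Queries the instance metadata service (http://169.254.169.254) under the hood,
// so it must be executed on the instance.
String instanceId = EC2MetadataUtils.getInstanceId();
String privateIp = EC2MetadataUtils.getPrivateIpAddress();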
I used Ansible to create a gce cluster following the guideline at: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
At the end of the GCE creation, I used the add_host Ansible module to register all instances in their corresponding groups, e.g. gce_master_ip.
But when I try to run the following play after the creation task, it does not work:
- name: Create redis on the master
  hosts: gce_master_ip
  connection: ssh
  become: True
  gather_facts: True
  vars_files:
    - gcp_vars/secrets/auth.yml
    - gcp_vars/machines.yml
  roles:
    - { role: redis, tags: ["redis"] }
Within the auth.yml file I already provided the service account email, path to the json credential file and the project id. But apparently that's not enough. I got errors like below:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
This is a typical "SSH username and credentials not permitted or not provided" error. In this case I would say I did not set up the username and private key for the SSH connection that Ansible will use to do the connecting.
Is there anything I should do to make sure the corresponding credentials are provided to establish the connection?
During my search I found one question that briefly mentioned you could use the gcloud compute ssh... command. But is there a way to tell Ansible not to use the classic ssh and to use the gcloud one instead?
To have Ansible SSH into a GCE instance, you'll have to supply an SSH username and a private key which corresponds to the SSH configuration available on the instance.
So the question is: If you've just used the gcp_compute_instance Ansible module to create a fresh GCE instance, is there a convenient way to configure SSH on the instance without having to manually connect to the instance and do it yourself?
For this purpose, GCP provides a couple of ways to automate and manage key distribution for GCE instances.
For example, you could use the OS Login feature. To use OS Login with Ansible:
When creating the instance using Ansible, enable OS Login on the target instance by setting the "enable-oslogin" metadata field to "TRUE" via the metadata parameter.
Make sure the Service Account attached to the instance that runs Ansible has both the roles/iam.serviceAccountUser and roles/compute.osLoginAdmin permissions.
Either generate a new or choose an existing SSH keypair that will be deployed to the target instance.
Upload the public key for use with OS Login: This can be done via gcloud compute os-login ssh-keys add --key-file [KEY_FILE_PATH] --ttl [EXPIRE_TIME] (where --ttl specifies how long you want this public key to be usable - for example, --ttl 1d will make it expire after 1 day)
Configure Ansible to use the Service Account's user name and the private key which corresponds to the public key uploaded via the gcloud command. For example by overriding the ansible_user and ansible_ssh_private_key_file inventory parameters, or by passing --private-key and --user parameters to ansible-playbook.
The service account username is the username value returned by the gcloud command above.
Also, if you want to automatically set the enable-oslogin metadata field to "TRUE" across all instances in your GCP project, you can simply add a project-wide metadata entry. This can be done in the Cloud Console under "Compute Engine > Metadata".