Given an instance ID, I want to get an EC2 instance's info (for example, its running state, private IP, and public IP).
I have done some research (e.g. looking at the sample code posted here: Managing Amazon EC2 Instances),
but there is only sample code for listing the Amazon EC2 instances for your account and region.
I tried to modify the sample and here is what I came up with:
private static AmazonEC2 getEc2StandardClient() {
    // Using StaticCredentialsProvider
    final String accessKey = "access_key";
    final String secretKey = "secret_key";
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
    return AmazonEC2ClientBuilder.standard()
            .withRegion(Regions.AP_NORTHEAST_1)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();
}
public static void getInstanceInfo(String instanceId) {
    final AmazonEC2 ec2 = getEc2StandardClient();
    DryRunSupportedRequest<DescribeInstancesRequest> dryRequest =
        () -> {
            DescribeInstancesRequest request = new DescribeInstancesRequest()
                .withInstanceIds(instanceId);
            return request.getDryRunRequest();
        };
    DryRunResult<DescribeInstancesRequest> dryResponse = ec2.dryRun(dryRequest);
    if (!dryResponse.isSuccessful()) {
        System.out.println("Failed to get information of instance " + instanceId);
    }
    DescribeInstancesRequest request = new DescribeInstancesRequest()
        .withInstanceIds(instanceId);
    DescribeInstancesResult response = ec2.describeInstances(request);
    Reservation reservation = response.getReservations().get(0);
    Instance instance = reservation.getInstances().get(0);
    System.out.println("Instance id: " + instance.getInstanceId() + ", state: " + instance.getState().getName() +
        ", public ip: " + instance.getPublicIpAddress() + ", private ip: " + instance.getPrivateIpAddress());
}
It is working fine, but I wonder if this is the best practice for getting info about a single instance.
"but there is only sample code of getting the Amazon EC2 instances for your account and region."
Yes, you can only get information about instances you have permission to read.
"It is working fine but I wonder if it's the best practice to get info from a single instance."
You have multiple options.
For getting EC2 metadata from any client (e.g. from your on-premises network), your code seems OK.
If you are running the code in the AWS environment (on an EC2 instance, Lambda, Docker, ...), you can give the service a role that is allowed to call the describeInstances operation. Then you don't need to specify the AWS credentials explicitly (the DefaultAWSCredentialsProviderChain will work).
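As a rough sketch, the role's permissions policy only needs the describe call; the policy below is illustrative, not taken from any particular account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```

Note that ec2:DescribeInstances does not support resource-level restrictions, which is why the resource is "*".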
If you are getting the EC2 metadata from the instance itself, you can use the EC2 instance metadata service.
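For illustration only, here is a minimal sketch of reading the metadata service over plain HTTP without the SDK; the class and method names are made up, the 169.254.169.254 endpoint is only reachable from the instance itself, and on instances enforcing IMDSv2 you would first have to fetch a session token:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MetadataExample {
    // Base URL of the instance metadata service; only reachable from the
    // instance itself (IMDSv2 may additionally require a session token).
    static final String IMDS_BASE = "http://169.254.169.254/latest/meta-data/";

    // Builds the full metadata URL for a given path such as "local-ipv4".
    static String metadataUrl(String path) {
        return IMDS_BASE + path;
    }

    // Fetches a single metadata value; this call only succeeds on an EC2 instance.
    static String fetchMetadata(String path) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(metadataUrl(path)).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) {
        // Off-instance we can only show which URLs would be queried.
        System.out.println("Would query: " + metadataUrl("local-ipv4"));
        System.out.println("Would query: " + metadataUrl("public-ipv4"));
    }
}
```

Running this on the instance and calling fetchMetadata("local-ipv4") would return the private IP without any credentials at all.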
Related
I have written some code to retrieve my secrets from AWS Secrets Manager to be used for further processing by other components. In my development environment I configured my credentials using the AWS CLI. Once the code was compiled, I was able to run it from VS and also from the exe that was generated.
Here is the code to connect to Secrets Manager and retrieve the secrets:
public static string Get(string secretName)
{
    var config = new AmazonSecretsManagerConfig { RegionEndpoint = RegionEndpoint.USWest2 };
    IAmazonSecretsManager client = new AmazonSecretsManagerClient(config);
    var request = new GetSecretValueRequest
    {
        SecretId = secretName
    };
    GetSecretValueResponse response = null;
    try
    {
        response = Task.Run(async () => await client.GetSecretValueAsync(request)).Result;
    }
    catch (ResourceNotFoundException)
    {
        Console.WriteLine("The requested secret " + secretName + " was not found");
    }
    catch (InvalidRequestException e)
    {
        Console.WriteLine("The request was invalid due to: " + e.Message);
    }
    catch (InvalidParameterException e)
    {
        Console.WriteLine("The request had invalid params: " + e.Message);
    }
    return response?.SecretString;
}
This code pulls credentials from the AWS CLI configuration, but when I try to run it on another PC it gives an IAM security error, as expected, because it cannot figure out what credentials to use to connect to Secrets Manager.
What would be the best approach to deploying such a configuration in production? Would I need to install and configure the AWS CLI on each and every deployment?
If you're deploying the code in AWS, you can use an IAM role with a policy that allows getting secrets from Secrets Manager, and attach this role to the EC2 instance, ECS task, etc.
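A sketch of such a policy; the region, account number, and secret name in the ARN are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-app-secret-*"
    }
  ]
}
```

With the role attached, the default credentials chain picks up the role's temporary credentials automatically, so the code above works unchanged.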
If you are in a corporate environment with existing authentication infrastructure in place, you probably want to look at identity federation solutions to use with AWS.
I have a lambda which is attempting to make a REST call to an on-prem server outside of AWS. We have the lambda running from a VPC which has a VPN connection to our local resources. The same REST call runs successfully from an EC2 instance within the VPC, but the lambda request hangs. The security groups are open. Any ideas how to debug this?
Here is the bulk of the lambda
def lambda_handler(event, context):
    config = configparser.ConfigParser()
    config.read('config')
    pattern = re.compile(".*" + config['DEFAULT']['my-pattern'])
    logger.info(event['Records'])
    sns_json = event['Records'][0]['Sns']
    sns_message = json.loads(sns_json['Message'])
    logger.info(sns_message['Records'][0]['s3'])
    s3_object = sns_message['Records'][0]['s3']
    new_file_name = s3_object['object']['key']
    bucket = s3_object['bucket']['name']
    if pattern.match(new_file_name):
        new_json = {"text": "New file (" + new_file_name + ") added to the bucket. " + bucket,
                    "title": config['DEFAULT']['default_message_title']}
        webhook_post = requests.get("http://some-ip:4500/")
        logger.info("Webhook Post Status: " + str(webhook_post.status_code) + str(webhook_post))
    logger.info("Skip teams webhook")
    outgoing_message_dict = {
        's3Bucket': bucket,
        'somefile': new_file_name
    }
    return outgoing_message_dict
I don't receive any errors from the request; it just hangs until my lambda times out.
I believe I found the source of the problem. Ultimately the issue was with our on-prem firewall. The VPN tunnel wasn't active at all times; others have mentioned that it needs to be activated from the on-prem network. I created an EC2 instance and connected to it, activating the VPN. When I ran the lambda shortly after, I could successfully reach the local REST endpoint I was trying to connect to.
I have not implemented the final solution yet, but from the firewall we should be able to set the connection to have a keep-alive ping so our connection does not time out. I hope this helps others. Thank you for the feedback!
After creating a Java 8 Elastic Beanstalk instance with RDS, the RDS connection details are not visible as environment variables (they are visible on other instances that are running).
After running the printenv command, the expectation was for these values to be available, but they are not:
RDS_HOSTNAME=foo.com
RDS_USERNAME=foo
RDS_PASSWORD=bar
These are required by the server config:
database:
  driverClass: com.mysql.jdbc.Driver
  user: ${RDS_USERNAME}
  password: ${RDS_PASSWORD}
  url: jdbc:mysql://${RDS_HOSTNAME}/${RDS_DB_NAME}
During application startup they are not available; the logs show a Java exception that it cannot find the environment variables.
io.dropwizard.configuration.UndefinedEnvironmentVariableException: The environment variable 'RDS_USERNAME' is not defined; could not substitute the expression '${RDS_USERNAME}'.
    at io.dropwizard.configuration.EnvironmentVariableLookup.lookup(EnvironmentVariableLookup.java:41)
    at org.apache.commons.lang3.text.StrSubstitutor.resolveVariable(StrSubstitutor.java:934)
    at org.apache.commons.lang3.text.StrSubstitutor.substitute(StrSubstitutor.java:855)
    at org.apache.commons.lang3.text.StrSubstitutor.substitute(StrSubstitutor.java:743)
    at org.apache.commons.lang3.text.StrSubstitutor.replace(StrSubstitutor.java:403)
    at io.dropwizard.configuration.SubstitutingSourceProvider.open(SubstitutingSourceProvider.java:39)
    at io.dropwizard.configuration.BaseConfigurationFactory.build(BaseConfigurationFactory.java:83)
    at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:124)
    at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:72)
    at io.dropwizard.cli.Cli.run(Cli.java:75)
    at io.dropwizard.Application.run(Application.java:93)
However, if I run the following command on the EC2 instance,
sudo /opt/elasticbeanstalk/bin/get-config environment
It prints the values out in JSON:
{"CONFIG":"dev.yml","RDS_HOSTNAME":"foo.com","RDS_PASSWORD":"foo","M2":"/usr/local/apache-maven/bin","M2_HOME":"/usr/local/apache-maven","RDS_DB_NAME":"foo","JAVA_HOME":"/usr/lib/jvm/java","RDS_USERNAME":"foo","GRADLE_HOME":"/usr/local/gradle","RDS_PORT":"3306"}
Any ideas how to restore these values for the ec2-user?
I have tried:
Restarting the EB instance
Rebuilding the instance
cat the values into a script that sets them after eb deploy
Any ideas why they are not visible on this particular instance?
Instance details
Environment details foo: foo-service
Application name: foo-service
Region: eu-west-2
Platform: arn:aws:elasticbeanstalk:eu-west-2::platform/Java 8 running on 64bit Amazon Linux/2.6.0
Tier: WebServer-Standard
Check this out; AWS won't expose the environment variables directly in your OS as you might expect:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I would run the EB environment update and/or replace the instance. Or you can move to "Storing the Connection String in Amazon S3".
When the environment update is complete, the DB instance's hostname and other connection information are available to your application through the following environment properties:
RDS_HOSTNAME – The hostname of the DB instance. Amazon RDS console label – Endpoint (this is the hostname)
RDS_PORT – The port on which the DB instance accepts connections. The default value varies between DB engines. Amazon RDS console label – Port
RDS_DB_NAME – The database name, ebdb. Amazon RDS console label – DB Name
RDS_USERNAME – The user name that you configured for your database. Amazon RDS console label – Username
RDS_PASSWORD – The password that you configured for your database.
private static Connection getRemoteConnection() {
    if (System.getenv("RDS_HOSTNAME") != null) {
        try {
            Class.forName("org.postgresql.Driver");
            String dbName = System.getenv("RDS_DB_NAME");
            String userName = System.getenv("RDS_USERNAME");
            String password = System.getenv("RDS_PASSWORD");
            String hostname = System.getenv("RDS_HOSTNAME");
            String port = System.getenv("RDS_PORT");
            String jdbcUrl = "jdbc:postgresql://" + hostname + ":" + port + "/" + dbName
                    + "?user=" + userName + "&password=" + password;
            logger.trace("Getting remote connection with connection string from environment variables.");
            Connection con = DriverManager.getConnection(jdbcUrl);
            logger.info("Remote connection successful.");
            return con;
        } catch (ClassNotFoundException e) {
            logger.warn(e.toString());
        } catch (SQLException e) {
            logger.warn(e.toString());
        }
    }
    return null;
}
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-rds.html#java-rds-javase
Can you try initializing them in /etc/environment? When I need to be sure an environment variable exists, I add it there. All user shells will get that variable set that way.
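For example, /etc/environment could contain lines like these (the values are placeholders; you'd have to keep them in sync with what get-config reports):

```
RDS_HOSTNAME=foo.com
RDS_PORT=3306
RDS_DB_NAME=ebdb
RDS_USERNAME=foo
RDS_PASSWORD=bar
```

Keep in mind this file is read at login, so existing sessions need to log out and back in to see the new values.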
I have a test Lambda function set up using Java 8.
So far I have
given the Lambda function the execution role AWSLambdaVPCAccessExecutionRole
attached the Lambda function to the only VPC I have on my account and selected all subnets within the VPC so that it may access the RDS instance
my RDS instance in this case is open to the public, and I am able to access it via my laptop (i.e. the Lambda code actually runs on remote hosts, not inside the VPC)
the security group assigned to the Lambda is the most permissive possible (i.e. all traffic on all CIDR blocks)
However, I am still unable to access my RDS instance when running the Lambda function on AWS (the same code works on my laptop when run from a main() function).
Sample Code
public class Application implements RequestHandler<Object, Boolean> {

    private Logger logger = Logger.getLogger(Application.class);

    public Boolean handleRequest(Object object, Context context) {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            logger.info("Calling DriverManager.getConnection()");
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://...endpoint...defaultdb",
                    "awsops",
                    "..."
            );
            Statement stmt = con.createStatement();
            logger.info("Test Started!");
            ResultSet result = stmt.executeQuery("SELECT\n" +
                    "    last_name, COUNT(*)\n" +
                    "FROM\n" +
                    "    users\n" +
                    "GROUP BY\n" +
                    "    last_name\n" +
                    "HAVING COUNT(*) > 1");
            if (result.next()) {
                logger.info(result.getString("last_name"));
            }
            return true;
        } catch (Exception e) {
            logger.info(e.getLocalizedMessage());
        }
        return false;
    }
}
Can you please help me understand what I could be doing wrong?
The CloudWatch logs show that the function hangs at DriverManager.getConnection().
It turns out my RDS instance was launched with an automatically created security group that simply whitelisted my personal IP address, which is why it felt like I was able to connect to it from "anywhere".
I had to update the security group of my RDS instance to allow traffic from the subnets where the Lambda's virtual network interface could be coming from.
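For reference, that kind of ingress rule can be added with the AWS CLI; the security-group IDs below are placeholders, and the command assumes the CLI is configured with credentials allowed to modify security groups:

```shell
# Placeholder IDs; substitute your RDS security group and the Lambda's security group.
RDS_SG=sg-0aaa1111bbbb22222
LAMBDA_SG=sg-0ccc3333dddd44444

# Allow MySQL traffic (port 3306) from the Lambda's security group
# into the RDS instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id "$RDS_SG" \
  --protocol tcp \
  --port 3306 \
  --source-group "$LAMBDA_SG" || true
```

Referencing the Lambda's security group as the source (rather than a CIDR block) keeps the rule valid even as the Lambda's network interfaces change IPs.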
I have attached an IAM role to my EC2 instance, and I am running the AWS Java SDK on it. When I try to load the credentials this way:
InstanceProfileCredentialsProvider instanceCred = new InstanceProfileCredentialsProvider();
I am getting the following error:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load credentials.
at com.amazonaws.auth.InstanceProfileCredentialsProvider.loadCredentials(InstanceProfileCredentialsProvider.java:195)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:124)
Can anyone suggest what I might be missing?
I was able to get around it by running the code from my EC2 instance and enhancing my code. Here is the code:
String arn = "arn:aws:iam::" + accountnumber + ":role/My-CrossAccount-CustomRole-ReadOnly";
InstanceProfileCredentialsProvider instanceCred = new InstanceProfileCredentialsProvider();
STSAssumeRoleSessionCredentialsProvider stscred =
        new STSAssumeRoleSessionCredentialsProvider(instanceCred.getCredentials(), arn, "123", clientConfiguration);
Make sure you have "sts:AssumeRole" in your role actions.
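In other words, the instance role needs a statement along these lines; the account number here is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/My-CrossAccount-CustomRole-ReadOnly"
    }
  ]
}
```

The target role in the other account must also trust your account (or instance role) in its trust policy, or the AssumeRole call will be denied.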