I'm trying to send emails using AWS SES (Simple Email Service) in my Spring Boot project.
I have verified my email address on AWS, and I have also created an AWS_ACCESS_KEY and AWS_SECRET_KEY.
Here's my code:
public static void amazonSimpleEmailService(String to, String subject, String htmlBody, String textBody) {
    AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
            .withCredentials(
                    new AWSStaticCredentialsProvider(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY)))
            .withRegion(Regions.EU_WEST_1).build();
    SendEmailRequest request = new SendEmailRequest().withDestination(new Destination().withToAddresses(to))
            .withMessage(new Message()
                    .withBody(new Body().withHtml(new Content().withCharset("UTF-8").withData(htmlBody))
                            .withText(new Content().withCharset("UTF-8").withData(textBody)))
                    .withSubject(new Content().withCharset("UTF-8").withData(subject)))
            .withSource(MAIL_ADMIN);
    client.sendEmail(request);
}
When building my code on Jenkins (an EC2 instance), this error occurs:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#55fe41ea: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/, com.amazonaws.auth.profile.ProfileCredentialsProvider#2aceadd4: profile file cannot be null]
I suspect I am missing something, but I cannot find what it is.
Related
I am trying to use my code to access S3 using the AWS SDK in a Java service, but I am receiving the following exception.
The code I use is as follows.
Configure the builder:
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setRetryPolicy( PredefinedRetryPolicies.getDefaultRetryPolicyWithCustomMaxRetries(5));
AmazonS3ClientBuilder s3ClientCommonBuilder = AmazonS3ClientBuilder.standard()
.withClientConfiguration(clientConfiguration)
.withForceGlobalBucketAccessEnabled(true)
.withPathStyleAccessEnabled(true);
s3ClientCommonBuilder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsKey, awsSecret)));
return s3ClientCommonBuilder;
Create the AwsS3Client:
AmazonS3ClientBuilder internalS3ClientBuilder =
createS3ClientCommonBuilder(s3bucket, awsKey, awsSecret);
if (internalServiceEndpoint != null && !internalServiceEndpoint.isEmpty()) {
internalS3ClientBuilder.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(internalServiceEndpoint, Region.EU_Ireland.getFirstRegionId()));
}
Then at run time, when the client attempts to connect to S3, the following exception is thrown:
1) Error in custom provider, com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
while locating AwsS3ClientProvider
at org.test.ApiServerBootstrap.configure(ApiServerBootstrap.java:103)
while locating org.test.AWSS3Client
for the 3rd parameter of org.test.DataUploader.<init>(DataUploader.java:60)
at org.test.DataUploader.class(DataUploader.java:49)
while locating org.test.DataUploader
for the 3rd parameter of org.test.StoreService.<init>(StoreService.java:56)
at org.test.services.v3.StoreService.class(StoreService.java:53)
while locating org.test.StoreService
for the 1st parameter of org.test.verticles.EventBusConsumerVerticle.<init>(EventBusConsumerVerticle.java:35)
at org.test.verticles.EventBusConsumerVerticle.class(EventBusConsumerVerticle.java:35)
while locating org.test.verticles.EventBusConsumerVerticle
for the 3rd parameter of org.test.ApiServer.<init>(ApiServer.java:52)
while locating org.test.ApiServer
1 error
at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1053)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1086)
at org.test.ApiServer.main(ApiServer.java:46)
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:462)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:424)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at org.test.AWSS3Client.create(AWSS3Client.java:99)
I have tested this on some other computers to verify that it is my AWS client configuration that is wrong.
My .aws/config file looks as follows
$ cat .aws/config
[default]
region = eu-west-1
output = json
My .aws/credentials file looks as follows.
$ cat .aws/credentials
[default]
aws_access_key_id=XXXXXXXX.....
aws_secret_access_key=XXXXXXXX
My AWS CLI version is as follows.
$ aws --version
aws-cli/2.2.40 Python/3.8.8 Linux/5.11.0-34-generic exe/x86_64.ubuntu.20 prompt/off
I would appreciate it if someone could figure out what I am missing for the AWS client to connect to S3.
You want to make sure you are using the DefaultAWSCredentialsProviderChain class when building your S3 client.
You can also create an environment variable AWS_REGION=eu-west-1.
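A minimal sketch of what that looks like with the v1 Java SDK (the class names and region match the question; the factory class and method names here are illustrative):

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {
    // The default chain tries, in order: environment variables, Java system
    // properties, the ~/.aws/credentials profile file, and finally the
    // EC2 instance profile / container credentials.
    public static AmazonS3 buildClient() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                // An explicit region avoids "Unable to find a region via the
                // region provider chain" when AWS_REGION is not exported.
                .withRegion(Regions.EU_WEST_1)
                .build();
    }
}
```

On an EC2 host like the Jenkins build agent in the first question, attaching an instance profile role to the instance means the default chain finds credentials without any keys being baked into the build.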
I have AWS MSK cluster with 2 broker nodes.
I am not able to produce any messages to the brokers at all; I always get a Local: Message timed out error.
I can access the cluster normally using the AWS CLI.
The cluster doesn't have any authentication; the application just has to run in a certain AWS VPC.
I have tried the simplest example and just replaced the bootstrap servers with the AWS cluster nodes.
Here is the code:
string brokerList = "b-1.8.c2.kafka.eu-central-1.amazonaws.com:9092,b-2.8.c.c2.kafka.eu-central-1.amazonaws.com:9092"; // sample node addresses
string topicName = "TestTopic";
var config = new ProducerConfig { BootstrapServers = brokerList };
using (var producer = new ProducerBuilder<string, string>(config).Build())
{
    Console.WriteLine("\n-----------------------------------------------------------------------");
    Console.WriteLine($"Producer {producer.Name} producing on topic {topicName}.");
    Console.WriteLine("-----------------------------------------------------------------------");
    try
    {
        var deliveryReport = await producer.ProduceAsync(
            topicName, new Message<string, string> { Key = "MyKey", Value = "MyValue" });
        Console.WriteLine($"delivered to: {deliveryReport.TopicPartitionOffset}");
    }
    catch (ProduceException<string, string> e)
    {
        Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
    }
}
I have tried both TLS and plaintext, which are both allowed on the cluster, and got the same error message.
I have also changed the cluster configuration from auto.create.topics.enable=false to auto.create.topics.enable=true, and still get the same result.
I am using the following setup:
Confluent.Kafka NuGet version: Confluent.Kafka 1.4.3
Apache Kafka version: 2.4.1
Error Message: Local: Message timed out
I have the problem that I can't send emails from the new AWS SES environments, which were introduced a month ago.
All the old ones are working fine (e.g. us-east-1, us-west-2, eu-west-1).
But if I want to send a mail from one of the new environments, e.g. eu-central-1, I just get the error message:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
But this can't be the case, because all the old ones are working fine with the same keys.
Therefore I would really appreciate it if somebody else could test the sample code with their account to check whether they have the same issue.
The new environments are eu-central-1, ap-south-1, and ap-southeast-2 (see the SES endpoint URLs).
Sample Code:
var ses = require('node-ses');
var client = ses.createClient({ key: '', secret: '', amazon: 'https://email.eu-central-1.amazonaws.com'});
async function sendMessage() {
let options = {};
options.from = "test@aol.com";
options.to = "test2@aol.com";
options.subject = "TestMail";
options.message = "Test";
console.log("Try to sendMessage");
client.sendEmail(options, function (err, data, res) {
console.log("Error: " + JSON.stringify(err));
console.log("Data: " + data);
console.log("res: " + res);
});
}
sendMessage();
The sample code uses the node-ses npm package, and you just need to enter AWS IAM user credentials which have SES access.
If you want to check different regions, you have to change the URL in the createClient constructor.
Don't worry, the sample code does not send an email!
If the region is working, it should throw an error message similar to this: "Email address is not verified. The following identities failed the check in region EU-WEST-1: test@aol.com, test2@aol.com"
Otherwise the error will be the one described above.
I also have to mention that I am currently still in sandbox mode, so maybe the new regions are blocked for sandbox users?
It's because you must be creating the SES credentials from the IAM console. You should instead create the credentials using the SES interface/console.
Follow this article to create smtp credentials using SES interface:
http://docs.amazonwebservices.com/ses/latest/GettingStartedGuide/GetAccessIDs.html
I am using IdentityServer4 with .NET Core 2.0 on AWS's ElasticBeanstalk. I have a certificate for signing tokens. What's the best way to store this certificate and retrieve it from the application? Should I just stick it with the application files? Throw it in an environment variable somehow?
Edit: just to be clear, this is a token signing certificate, not an SSL certificate.
I don't really like the term 'token signing certificate' because it sounds so benign. What you have is a private key (as part of the certificate), and everyone knows you should secure your private keys!
I wouldn't store this in your application files. If someone gets your source code, they shouldn't also get the keys to your sensitive data (if someone has your signing cert, they can generate any token they like and pretend to be any of your users).
I would consider storing the certificate in the AWS Parameter Store. You could paste the certificate into a parameter, which can be encrypted at rest. You then lock down the parameter with an AWS policy so only admins and the application can get the cert; your naughty devs don't need it! Your application would pull the parameter string when needed and turn it into your certificate object.
This is how I store secrets in my application. I can provide more examples/details if required.
Edit -- This was the final result from Stu's guidance
The project needs two AWS packages from NuGet:
AWSSDK.Extensions.NETCORE.Setup
AWSSDK.SimpleSystemsManagement
Create 2 parameters in the AWS SSM Parameter Store like:
A plain string named /MyApp/Staging/SigningCertificate and the value is a Base64 encoded .pfx file
An encrypted string /MyApp/Staging/SigningCertificateSecret and the value is the password to the above .pfx file
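To produce the Base64 value for the first parameter, a small JDK-only helper is enough (the class name and file path here are just examples; any Base64 encoder of the raw .pfx bytes gives the same result):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

public class PfxEncoder {
    // Reads the .pfx file and returns the Base64 string to paste into the
    // SSM parameter (e.g. /MyApp/Staging/SigningCertificate).
    public static String encode(Path pfxPath) throws IOException {
        byte[] raw = Files.readAllBytes(pfxPath);
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws IOException {
        // Usage: java PfxEncoder signing-cert.pfx
        System.out.println(encode(Paths.get(args[0])));
    }
}
```

The GetSigningCertificate method in the code below reverses this at startup with Convert.FromBase64String.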
This is the relevant code:
// In Startup class
private X509Certificate2 GetSigningCertificate()
{
// Configuration is the IConfiguration built by the WebHost in my Program.cs and injected into the Startup constructor
var awsOptions = Configuration.GetAWSOptions();
var ssmClient = awsOptions.CreateServiceClient<IAmazonSimpleSystemsManagement>();
// This is blocking because this is called during synchronous startup operations of the WebHost-- Startup.ConfigureServices()
var res = ssmClient.GetParametersByPathAsync(new Amazon.SimpleSystemsManagement.Model.GetParametersByPathRequest()
{
Path = "/MyApp/Staging",
WithDecryption = true
}).GetAwaiter().GetResult();
// Decode the certificate
var base64EncodedCert = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificate")?.Value;
var certificatePassword = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificateSecret")?.Value;
byte[] decodedPfxBytes = Convert.FromBase64String(base64EncodedCert);
return new X509Certificate2(decodedPfxBytes, certificatePassword);
}
public void ConfigureServices(IServiceCollection services)
{
// ...
var identityServerBuilder = services.AddIdentityServer();
var signingCertificate = GetSigningCertificate();
identityServerBuilder.AddSigningCredential(signingCertificate);
//...
}
Last, you may need to set an IAM role and/or policy to your EC2 instance(s) that gives access to these SSM parameters.
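A policy along these lines is what I mean (the account ID, region, key ID, and parameter path are placeholders for your own values; the kms:Decrypt statement is only needed when the secure string is encrypted with a customer-managed KMS key):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParametersByPath", "ssm:GetParameter"],
      "Resource": "arn:aws:ssm:eu-west-1:111122223333:parameter/MyApp/Staging/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:eu-west-1:111122223333:key/your-key-id"
    }
  ]
}
```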
Edit: I have been moving my web application SSL termination from my load balancer to my elastic beanstalk instance this week. This requires storing my private key in S3. Details from AWS here: Storing Private Keys Securely in Amazon S3
I am trying to automate the process of creating an application version for an existing Elastic Beanstalk application through the Java API and command line arguments.
While implementing createApplicationVersion() of AWSElasticBeanstalkClient, I am getting an error for the below code snippet.
Note: I am passing the endpoint for AWSElasticBeanstalkClient as US East-1 (N. Virginia) or the environment URL for the existing application.
ArrayList<String> s3SourceBundleList = AmazonS3BucketUploadApp.doBucketUploadFromLocal(sourceLocation);
String bucketName = s3SourceBundleList.get(0);
String keyName = java.net.URLEncoder.encode(s3SourceBundleList.get(1), "UTF-8");
//String keyName = s3SourceBundleList.get(1);
S3Location s3SourceBundle = new S3Location();
s3SourceBundle.setS3Bucket(bucketName);
s3SourceBundle.setS3Key(keyName);
createApplicationVersionRequest.setSourceBundle(s3SourceBundle);
createApplicationVersionRequest.setDescription("New version");
appVersionResultObject = awsBeanstalkclient.createApplicationVersion(createApplicationVersionRequest);
Error:
com.amazonaws.AmazonClientException: Unable to unmarshall response (ParseError at [row,col]:[6,1]
And one more error is:
AWS service: AmazonElasticBeanstalk AWS Request ID: null AWS service unavailable.
Please suggest a solution for this.
How are you initializing the client? (Check the log output; enabling the org.apache.http.wire logger at TRACE level could help.)
If you want an idea, peek at this source:
https://github.com/jenkinsci/awseb-deployment-plugin/blob/master/src/main/java/br/com/ingenieux/jenkins/plugins/awsebdeployment/Deployer.java
It contains all you need to build and deploy into AWS EB :)
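For reference, a sketch of the initialization I would compare against, using the v1 SDK: a region enum rather than a hand-typed endpoint or environment URL, and the raw S3 key passed through un-encoded (the SDK does its own request encoding). The class and parameter names here are illustrative:

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
import com.amazonaws.services.elasticbeanstalk.model.CreateApplicationVersionRequest;
import com.amazonaws.services.elasticbeanstalk.model.S3Location;

public class VersionCreator {
    public static void createVersion(String appName, String versionLabel,
                                     String bucketName, String keyName) {
        // Let the builder derive the service endpoint from the region enum,
        // instead of passing an environment URL as the endpoint.
        AWSElasticBeanstalk client = AWSElasticBeanstalkClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        CreateApplicationVersionRequest request = new CreateApplicationVersionRequest()
                .withApplicationName(appName)
                .withVersionLabel(versionLabel)
                // Pass the raw S3 key, not a URL-encoded one.
                .withSourceBundle(new S3Location()
                        .withS3Bucket(bucketName)
                        .withS3Key(keyName))
                .withDescription("New version");
        client.createApplicationVersion(request);
    }
}
```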