I am trying to sync my data from MongoDB into Amazon Elasticsearch Service using mongoosastic. For some reason, the sync is not happening as expected.
I do not see any error from the mongoosastic plugin, so I am not sure what is failing in the AWS Elasticsearch Service.
Is there any way to get Elasticsearch logs in Amazon Elasticsearch Service?
const elasticsearch = require('elasticsearch');

// The legacy elasticsearch-js client takes a config object, not a bare string.
this.es_connection = new elasticsearch.Client({
  host: 'Amazon Elasticsearch Service address' // e.g. https://my-domain.eu-west-1.es.amazonaws.com
});

return this.es_connection.ping({
  requestTimeout: 30000,
  hello: 'elasticsearch' // undocumented params are appended to the query string
}, function (error) {
  if (error) {
    console.error('elasticsearch cluster is down! ' + JSON.stringify(error));
  } else {
    logger.info('All is well in elasticsearch');
  }
});
In order to troubleshoot the AWS Elasticsearch service, you'll need to configure log shipping to CloudWatch:
https://aws.amazon.com/blogs/big-data/viewing-amazon-elasticsearch-service-error-logs/
Then you will be able to use the CloudWatch console to view your logs and work out whether the problem is in Elasticsearch itself or is a mongoosastic issue (mapping or index failures).
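If you prefer to enable error-log publishing from code rather than through the console, a rough sketch with the AWS SDK for Node.js is shown below; the domain name, region and log-group ARN are placeholders, and the log group needs a resource policy that allows es.amazonaws.com to write to it:

// Sketch: turn on Elasticsearch application (error) log publishing to CloudWatch Logs.
// DomainName, region and the log-group ARN are placeholders.
const AWS = require('aws-sdk');
const es = new AWS.ES({ region: 'eu-west-1' });

es.updateElasticsearchDomainConfig({
  DomainName: 'my-domain',
  LogPublishingOptions: {
    ES_APPLICATION_LOGS: {
      CloudWatchLogsLogGroupArn: 'arn:aws:logs:eu-west-1:123456789012:log-group:/aws/aes/domains/my-domain/application-logs',
      Enabled: true
    }
  }
}, function (err, data) {
  if (err) console.error(err);
  else console.log(JSON.stringify(data, null, 2));
});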
Cloud: AWS
ES Version: 7.4
Error message while searching:
{"error":{"message":"[too_many_buckets_exception] Trying to create too many buckets. Must be less than or equal to: [110000] but was [110001]. This limit can be set by changing the [search.max_buckets] cluster level setting., with { max_buckets=110000 }"}}
The question is where/how I can set this property from the AWS console or the AWS CLI.
Regards
Amit Meena
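Amazon Elasticsearch Service does not expose this setting in the console or the CLI; cluster-level settings such as search.max_buckets are normally changed by sending PUT _cluster/settings to the domain endpoint (assuming the domain permits updating that setting). A rough sketch with the Node.js elasticsearch client, where the endpoint and the new limit are placeholders:

// Sketch: raise search.max_buckets by calling the _cluster/settings API
// against the domain endpoint. Endpoint URL and the new limit are placeholders.
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'https://your-amazon-es-domain-endpoint'
});

client.cluster.putSettings({
  body: {
    persistent: {
      'search.max_buckets': 150000
    }
  }
}).then(
  function (resp) { console.log('settings updated: ' + JSON.stringify(resp)); },
  function (err) { console.error('failed to update settings: ' + JSON.stringify(err)); }
);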
I have written some code to retrieve my secrets from AWS Secrets Manager, to be used for further processing by other components. In my development environment I configured my credentials using the AWS CLI. Once the code is compiled, I am able to run it from Visual Studio and also from the generated exe.
Here is the code to connect to the secrets manager and retrieve the secrets
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

public static string Get(string secretName)
{
    var config = new AmazonSecretsManagerConfig { RegionEndpoint = RegionEndpoint.USWest2 };
    // Uses the default credential chain (CLI profile, environment variables, instance profile, ...)
    IAmazonSecretsManager client = new AmazonSecretsManagerClient(config);

    var request = new GetSecretValueRequest
    {
        SecretId = secretName
    };

    GetSecretValueResponse response = null;
    try
    {
        // Block on the async call so the method can stay synchronous.
        response = Task.Run(async () => await client.GetSecretValueAsync(request)).Result;
    }
    catch (ResourceNotFoundException)
    {
        Console.WriteLine("The requested secret " + secretName + " was not found");
    }
    catch (InvalidRequestException e)
    {
        Console.WriteLine("The request was invalid due to: " + e.Message);
    }
    catch (InvalidParameterException e)
    {
        Console.WriteLine("The request had invalid params: " + e.Message);
    }
    return response?.SecretString;
}
This code pulls credentials from the AWS CLI configuration, but when I try to run it on another PC it gives an IAM security error, as expected, because it cannot figure out which credentials to use to connect to Secrets Manager.
What would be the best approach to deploy such a configuration in production? Would I need to install and configure the AWS CLI in each and every deployment?
If you're deploying the code in AWS, you can use an IAM role with a policy that allows getting secrets from Secrets Manager, and attach this role to the EC2 instance, ECS task, etc. The SDK's default credential chain picks up the role automatically, so nothing has to be configured on the machine.
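For example, a minimal policy for such a role might look like the sketch below; the secret ARN is a placeholder and should be scoped to your actual secrets:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-app/*"
    }
  ]
}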
If you are in a corporate environment with existing authentication infrastructure in place, you probably want to look at identity federation solutions to use with AWS.
We have an AWS CloudFormation template through which we are creating an Amazon MSK (Kafka) cluster, which is working fine.
Now we have multiple applications in our product stack that consume the broker endpoints created by Amazon MSK. To automate the product deployment we decided to create a Route 53 record set for the MSK broker endpoints, but we are having a hard time finding out how to get the broker endpoints of the MSK cluster as Outputs in the AWS CloudFormation template.
Looking forward to suggestions/guidance on this.
Following on from @joinEffort's answer, this is how I did it using custom resources, as the CFN resource for an MSK::Cluster does not expose the broker URLs:
(Option 2 uses boto3 and calls the AWS API directly.)
The description of the classes and methods to use from CDK custom resource code can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kafka.html#getBootstrapBrokers-property
Option 1: Using custom resource:
def get_bootstrap_servers(self):
    create_params = {
        "ClusterArn": self._platform_msk_cluster_arn
    }

    # SDK call that asks the Kafka (MSK) API for the bootstrap broker string
    get_bootstrap_brokers = custom_resources.AwsSdkCall(
        service='Kafka',
        action='getBootstrapBrokers',
        region='ap-southeast-2',
        physical_resource_id=custom_resources.PhysicalResourceId.of(f'connector-{self._environment_name}'),
        parameters=create_params
    )

    # Custom resource that performs the call on create and on update
    create_update_custom_plugin = custom_resources.AwsCustomResource(
        self,
        'getBootstrapBrokers',
        on_create=get_bootstrap_brokers,
        on_update=get_bootstrap_brokers,
        policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
            resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE
        )
    )

    return create_update_custom_plugin.get_response_field('BootstrapBrokerString')
Option 2: Using boto3:
import boto3

client = boto3.client('kafka', region_name='ap-southeast-2')
response = client.get_bootstrap_brokers(
    ClusterArn='xxx')

# From here you can get the broker URLs (the response is already a dict), e.g.:
brokers = response['BootstrapBrokerString']
Ref: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.get_bootstrap_brokers
You should be able to get it from the command below. More can be found in the AWS CLI reference for aws kafka.
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn
I've set up an Aurora PostgreSQL-compatible database. I can connect to the database via the public address, but I'm not able to connect via a Lambda function which is placed in the same VPC.
This is just a test environment and the security settings are weak. In the network settings I tried "no VPC" and I tried my default VPC, where the database and the Lambda are placed, but this doesn't make a difference.
This is my Node.js code to run a simple SELECT statement:
const AWS = require('aws-sdk');

var params = {
  awsSecretStoreArn: '{mySecurityARN}',
  dbClusterOrInstanceArn: 'myDB-ARN',
  sqlStatements: 'SELECT * FROM userrole',
  database: 'test',
  schema: 'user'
};

const aurora = new AWS.RDSDataService();
let userrightData = await aurora.executeSql(params).promise();
When I start my test in the AWS console I get the following error:
"errorType": "UnknownEndpoint",
"errorMessage": "Inaccessible host: `rds-data.eu-central- 1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
"trace": [
"UnknownEndpoint: Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
I've already checked Amazon's tutorial, but I can't find anything I didn't try.
The error message "This service may not be available in the `eu-central-1' region." is absolutely right because in eu-central-1 an Aurora Serverless database is not available.
I configured an Aurora Postgres and not an Aurora Serverless DB.
"AWS.RDSDataService()", what I wanted to use to connect to the RDB, is only available in relation to an Aurora Serverless instance. It's described here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDSDataService.html.
This is the reason why this error message appeared.
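Since the Data API (RDSDataService) is only for Aurora Serverless, a Lambda placed in the same VPC as a provisioned Aurora PostgreSQL cluster can instead connect directly with a regular PostgreSQL driver. A rough sketch using the node-postgres (pg) package follows; the host, credentials and database name are placeholders, and the cluster's security group must allow the Lambda's security group on port 5432:

// Sketch: direct connection from a VPC-attached Lambda using node-postgres.
// Host, credentials and database name are placeholders.
const { Client } = require('pg');

exports.handler = async () => {
  const client = new Client({
    host: 'my-cluster.cluster-xxxxxxxx.eu-central-1.rds.amazonaws.com',
    port: 5432,
    user: 'postgres',
    password: process.env.DB_PASSWORD,
    database: 'test'
  });

  await client.connect();
  try {
    const res = await client.query('SELECT * FROM userrole');
    return res.rows;
  } finally {
    await client.end();
  }
};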
I have been reading various articles/docs and watching some videos on this topic. My issue is that they all conflict in one way or another.
My goal is to use winston to send all console.logs/error messages from my EC2 server to CloudWatch, so that no logs are ever logged on the EC2 terminal itself.
Points of confusion:
If I use winston-aws-cloudwatch or winston-cloudwatch, do I still need to set up an IAM user on AWS, or will these auto-generate logs within CloudWatch?
If I set up CloudWatch as per the AWS documentation, will that automatically stream any would-be console.logs from the EC2 server to CloudWatch, or will it do both (log on the server and stream to CloudWatch)? If the former, then I don't need winston?
Can I send logs from my local development server to CloudWatch (just for testing purposes; as soon as it is clear it works, I would test on staging and finally move it to production), or must they come from an EC2 instance?
I assume the AWS CloudWatch key is the same as the AWS key I use for the rest of my account?
Present code:
var winston = require('winston'),
    CloudWatchTransport = require('winston-aws-cloudwatch');

// Console transport only; the CloudWatch transport is added conditionally below.
const logger = new winston.Logger({
  transports: [
    new (winston.transports.Console)({
      timestamp: true,
      colorize: true
    })
  ]
});

const cloudwatchConfig = {
  logGroupName: 'groupName',
  logStreamName: 'streamName',
  createLogGroup: false,
  createLogStream: true,
  awsConfig: {
    // The aws-sdk expects camelCase credential keys.
    accessKeyId: process.env.AWS_KEY_I_USE_FOR_AWS,
    secretAccessKey: process.env.AWS_SECRET_KEY_I_USE_FOR_AWS,
    region: process.env.REGION_CLOUDWATCH_IS_IN
  },
  formatLog: function (item) {
    return item.level + ': ' + item.message + ' ' + JSON.stringify(item.meta);
  }
};

logger.level = 'debug'; // winston expects a named level rather than a number

if (process.env.NODE_ENV === 'development') logger.add(CloudWatchTransport, cloudwatchConfig);

logger.stream = {
  write: function (message, encoding) {
    logger.info(message);
  }
};

logger.error('Test log');
Yes
It depends on the transports you configure. If you configure only CloudWatch, then logs will only end up there. Currently your code has two transports, the normal Console one and the CloudWatchTransport, so with your current code: both.
As long as you specify your keys as you would normally do with any AWS service (S3, DB, ...) you can push logs from your local/dev device to CloudWatch.
It depends on whether your IAM user has the required privileges, but yes, it is possible.
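Following on from the second point: if the goal is that nothing is ever written to the EC2 terminal, drop the Console transport and keep only the CloudWatch one. A rough sketch along those lines (the log group/stream names and credential environment variables are placeholders):

// Sketch: CloudWatch-only logger so nothing is printed on the EC2 terminal.
// Log group/stream names and the credential environment variables are placeholders.
var winston = require('winston'),
    CloudWatchTransport = require('winston-aws-cloudwatch');

const logger = new winston.Logger({ transports: [] });

logger.add(CloudWatchTransport, {
  logGroupName: 'groupName',
  logStreamName: 'streamName',
  createLogGroup: true,
  createLogStream: true,
  awsConfig: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: process.env.AWS_REGION
  }
});

logger.info('This line goes to CloudWatch only');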