404 not found on AWS

I have a Spring Boot project that runs normally on localhost, but when I deploy the WAR to AWS using Elastic Beanstalk, I get a 404 Not Found.
Access to DynamoDB works fine from the CLI.
The values in the properties file are the same as the ones I use to access DynamoDB from the CLI.
properties file:
amazon.dynamodb.endpoint=http://dynamodb.us-west-2.amazonaws.com
amazon.aws.accesskey=***
amazon.aws.secretkey=***
CLI:
aws configure
AWS Access Key ID [********************]:
AWS Secret Access Key [********************]:
Default region name [us-west-2]:
Default output format [json]:
I don't know why I am getting the 404 Not Found on AWS.

Elastic Beanstalk's reverse proxy forwards traffic to port 5000 by default, so you just need to set server.port=5000 in your application.properties file before building the WAR. Alternatively, set a SERVER_PORT environment property on the environment so Spring Boot listens on 5000 instead of its default 8080.
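For example, either of the following should work; the eb setenv form is a sketch that assumes you manage the environment with the EB CLI:
# application.properties — listen on the port the Beanstalk proxy forwards to
server.port=5000
Or, without rebuilding the WAR, set the property on the environment (Spring Boot's relaxed binding maps SERVER_PORT to server.port):
eb setenv SERVER_PORT=5000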
Another issue I ran into was creating the environment with the Java platform, when a WAR deployment should actually use the Tomcat platform.
Great step by step here and extra context here.

Related

GCP Airflow connection using Secret Manager

I am trying to add an Airflow connection for GCP (the service-account key should be fetched from Secret Manager), but in my Airflow UI (version 2.1.4) I couldn't find an option for adding it using Secret Manager. Is it because of the version?
If so, can we add the Airflow connection (using Secret Manager) via the command line (gcloud) or programmatically?
I tried via the command line, but it throws the error below:
gcloud composer environments run project_id --location europe-west2 connections add -- edw_test --conn-type=google_cloud_platform --conn-extra '{"extra__google_cloud_platform__project": "proejct", "extra__google_cloud_platform__key_secret_name": "test_edw","extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for europe-west2--902058d8-gke.
Unable to connect to the server: dial tcp 172.16.10.2:443: i/o timeout
ERROR: (gcloud.composer.environments.run) kubectl returned non-zero status code.
I have upgraded both the Composer and Airflow versions, which paved the way for creating the Airflow connection while keeping the keys in Secret Manager.
You can do this by configuring Airflow to use Secret Manager as a secrets backend. For this to work, however, the service account you use to access the backend needs permission to access secrets.
Secrets Backend
For example, you can set the value directly in airflow.cfg:
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
Via environment variable:
export AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
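If you need to point the backend at a specific project or adjust the prefixes, the backend also accepts backend_kwargs in airflow.cfg; a minimal sketch, where the project_id value is a placeholder:
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
backend_kwargs = {"connections_prefix": "airflow-connections", "project_id": "your-project-id"}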
Creating Connection
Then you can create a secret directly in Secret Manager. If you have configured your Airflow instance to use Secret Manager as the secrets backend, it will pick up any secrets that have the correct prefix.
The default prefixes are:
airflow-connections
airflow-variables
airflow-config
In your case, you would create a secret named airflow-connections-edw_test, and set the value to google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform
Note that the parameters have to be url encoded.
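For illustration, the secret can be created from the command line; this sketch assumes the gcloud CLI is authenticated and the Secret Manager API is enabled:
# create the secret the backend will resolve for connection id "edw_test"
gcloud secrets create airflow-connections-edw_test --replication-policy=automatic
# store the URL-encoded connection URI as the secret value
echo -n 'google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform' | gcloud secrets versions add airflow-connections-edw_test --data-file=-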
More info:
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/secrets-backends/google-cloud-secret-manager-backend.html#enabling-the-secret-backend
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/connections/gcp.html

How to configure AWS credentials to set up CloudWatch with Fluent Bit

I need to send logs to CloudWatch using Fluent Bit from an application hosted on my local system, but I am unable to configure the AWS credentials for Fluent Bit to send logs to CloudWatch.
Any help would be greatly appreciated.
Some of the logs are as follows:
[aws_credentials] Initialized Env Provider in standard chain
[aws_credentials] Failed to initialized profile provider: $HOME not set and AWS_SHARED_CREDENTIALS_FILE not set.
[aws_credentials] Not initializing EKS provider because AWS_ROLE_ARN was not set
[aws_credentials] Initialized EC2 Provider in standard chain
[aws_credentials] Not initializing ECS Provider because AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is not set
[aws_credentials] Sync called on the EC2 provider
[aws_credentials] Init called on the env provider
[aws_credentials] Init called on the EC2 IMDS provider
[aws_credentials] requesting credentials from EC2 IMDS
Any standard way of passing credentials should work here (see the sketch below):
export the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or
create ~/.aws/credentials (the aws configure prompt can do this for you).
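A minimal sketch of both approaches; the key values are placeholders, and note that the log above shows $HOME was not set for the Fluent Bit process, which the credentials-file approach needs:
# option 1: environment variables visible to the Fluent Bit process
export AWS_ACCESS_KEY_ID=AKIA...        # placeholder
export AWS_SECRET_ACCESS_KEY=...        # placeholder
export AWS_REGION=us-east-1             # region of the target log group
# option 2: shared credentials file, i.e. what "aws configure" writes to ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...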
During my testing the environment variables weren't enough, and the same went for the credentials file.
I installed the AWS CLI, configured it with the keys, and now it works as expected. I am working with containers, though, and the AWS CLI adds extra size that I don't need, so if anyone knows a way to do it without it, that would be awesome.
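One way to keep the image small is to skip the AWS CLI and pass the credentials in from outside the container; a sketch using the official fluent/bit image, with placeholder values and a config path that will differ in your setup:
docker run --rm \
  -e AWS_ACCESS_KEY_ID=AKIA... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -e AWS_REGION=us-east-1 \
  -v $(pwd)/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
  fluent/bit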

How to configure Winlogbeat to connect to AWS Elasticsearch

I would like to send Windows events to AWS Elasticsearch. The Elasticsearch domain needs an access key and a secret key to connect, but I can't find where to put them in the Winlogbeat configuration. Please find my YAML config below.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  hosts: ["https://vpc-manufacturing-elasticsearch-celm5zj5gcf45hpghulnxshco4.ap-southeast-2.es.amazonaws.com"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
  region: "where to specify"
  aws_access_key_id: "where to specify"
  aws_secret_access_key: "where to specify"
  #User: es-mfg
Beats don't support AWS authentication. Your options are:
Set up Fine-Grained Access Control in Amazon Elasticsearch Service, enable basic auth, and proceed with the elasticsearch output (a sketch follows below).
For an IAM-based domain access policy, set up Logstash, install the logstash-output-amazon-es plugin, and properly set your access credentials. Finally, configure the logstash output in your Beat to point at this Logstash instance.
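For the first option, the Winlogbeat output section could look roughly like this; a sketch assuming fine-grained access control is enabled on the domain and a master user exists (username and password are placeholders):
output.elasticsearch:
  hosts: ["https://vpc-manufacturing-elasticsearch-celm5zj5gcf45hpghulnxshco4.ap-southeast-2.es.amazonaws.com:443"]
  protocol: "https"
  username: "your-master-user"      # placeholder
  password: "your-master-password"  # placeholder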

Django (django-ses-gateway) gives default region as EU-WEST-1 instead of US-EAST-1

I have an application on EC2 that needs to send email.
I am using Django on AWS with the 'django-ses-gateway' module to send email.
EC2 is configured, so the ~/.aws folder has the appropriate credentials file with the region under 'default'.
However, whenever the application tries to send an email, it tries by default to use the "EU-WEST-1" region, which is not the expected one; it should use "US-EAST-1".
Because of the wrong region, the application fails.
PS:
I also verified that the "settings.py" file is not overriding the region.
Finally, I got the solution.
The 'django_ses_gateway' (version 0.1.1) module has a bug.
By default it selects the EU-WEST-1 region,
so 'sending_mail.py' needs a correction so that it does not hard-code the EU region.
The location of the installed package can be found with the 'pip3 show django-ses-gateway' command.
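For example (a sketch; the install location will differ on your machine):
pip3 show django-ses-gateway | grep Location
# prints something like: Location: /usr/local/lib/python3.8/site-packages
# the file to adjust is then <that-location>/django_ses_gateway/sending_mail.py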

How to configure Spark running in local mode on Amazon EC2 to use the IAM role for S3

I'm running Spark 2 in local mode on an Amazon EC2 instance, and when I try to read data from S3 I get the following exception:
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively)
I can, but I would rather not set the AccessKey and the SecretKey manually in the code, for security reasons.
The EC2 instance has an IAM role that allows it full access to the relevant S3 bucket. For every other Amazon API call this is sufficient, but Spark seems to ignore it.
Can I set Spark to use this IAM role instead of the AccessKey and the SecretKey?
Switch to using the s3a:// scheme (with the Hadoop 2.7.x JARs on your classpath) and this happens automatically. The "s3://" scheme with non-EMR versions of Spark/Hadoop is not the connector you want: it is old, non-interoperable, and has been removed from recent versions.
I am using hadoop-2.8.0 and spark-2.2.0-bin-hadoop2.7.
Spark-S3-IAM integration works well with the following AWS packages on the driver:
spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 ...
Scala code snippet:
sc.textFile("s3a://.../file.gz").count()
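If the instance-profile credentials are still not picked up, newer hadoop-aws versions (2.8+) let you pin the s3a credentials provider explicitly; a sketch, where the bucket and path are placeholders:
// force s3a to resolve credentials from the EC2 instance profile instead of looking for keys
sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")
sc.textFile("s3a://your-bucket/path/file.gz").count()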