I would like to send Windows events to AWS Elasticsearch. The Elasticsearch domain has an API key and a secret key which are needed to connect, but I can't find where to specify them in the Winlogbeat configuration. Please see my yml config below.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  hosts: ["https://vpc-manufacturing-elasticsearch-celm5zj5gcf45hpghulnxshco4.ap-southeast-2.es.amazonaws.com"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
  region: "where to specify"
  aws_access_key_id: "where to specify"
  aws_secret_access_key: "where to specify"
  #User: es-mfg
Beats don't support AWS authentication. Your options are:
Set up Fine-Grained Access Control in Amazon Elasticsearch Service, enable basic auth, and proceed with the elasticsearch output (a minimal config sketch follows below).
For an IAM-based domain access policy, set up Logstash, install the logstash-output-amazon-es plugin, and properly set your access credentials. Finally, configure the logstash output in your Beat to point to this Logstash instance.
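For the first option, the relevant part of winlogbeat.yml would just use basic auth; a minimal sketch, assuming a master user created via Fine-Grained Access Control (the username and password below are placeholders, and the managed domain listens on port 443):
output.elasticsearch:
  # Amazon Elasticsearch Service listens on 443, not the Beats default of 9200
  hosts: ["https://vpc-manufacturing-elasticsearch-celm5zj5gcf45hpghulnxshco4.ap-southeast-2.es.amazonaws.com:443"]
  protocol: "https"
  username: "your_master_user"      # placeholder FGAC master user
  password: "your_master_password"  # placeholder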
I am trying to add an Airflow connection for GCP (the SA key should be fetched from Secret Manager), but in my Airflow UI (version 2.1.4) I couldn't find an option for adding it using Secret Manager. Is it because of a version problem?
If so, can we add the Airflow connection (using Secret Manager) via the command line (gcloud) or programmatically?
I tried via the command line but it throws the error below:
gcloud composer environments run project_id --location europe-west2 connections add -- edw_test --conn-type=google_cloud_platform --conn-extra '{"extra__google_cloud_platform__project": "proejct", "extra__google_cloud_platform__key_secret_name": "test_edw","extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for europe-west2--902058d8-gke.
Unable to connect to the server: dial tcp 172.16.10.2:443: i/o timeout
ERROR: (gcloud.composer.environments.run) kubectl returned non-zero status code.
I have upgraded both the Composer and Airflow versions, which paved the way for creating the Airflow connection while keeping the keys in Secret Manager.
You can do this by configuring Airflow to use Secret Manager as a secrets backend. For this to work, however, the service account you use to access the backend needs to have permission to access secrets.
Secrets Backend
For example, you can set the value directly in airflow.cfg:
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
Via environment variable:
export AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
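If you need to point the backend at a specific project or change the secret prefixes, the backend also accepts keyword arguments; a minimal sketch in airflow.cfg (the project ID below is a placeholder):
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
backend_kwargs = {"connections_prefix": "airflow-connections", "project_id": "my-gcp-project"}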
Creating Connection
Then you can create a secret directly in Secret Manager. If you have configured your Airflow instance to use Secret Manager as the secrets backend, it will pick up any secrets that have the correct prefix.
The default prefixes are:
airflow-connections
airflow-variables
airflow-config
In your case, you would create a secret named airflow-connections-edw_test, and set the value to google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform
Note that the parameters have to be url encoded.
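For illustration, creating that secret from the command line could look like the following sketch (assuming the Secret Manager API is enabled and gcloud is authenticated; the connection URI is the one shown above):
# Create the secret with the prefix Airflow expects for connections
gcloud secrets create airflow-connections-edw_test --replication-policy="automatic"
# Store the URL-encoded connection URI as the secret value
echo -n 'google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform' | \
  gcloud secrets versions add airflow-connections-edw_test --data-file=-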
More info:
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/secrets-backends/google-cloud-secret-manager-backend.html#enabling-the-secret-backend
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/connections/gcp.html
I am trying to connect to AWS Neptune DB after enabling IAM DB authorization, and it is not able to connect, failing with the error below.
{"code":"AccessDeniedException","requestId":"68bbc87a-cbf6-31d3-5829-91f32062239f","detailedMessage":"Missing Authentication Token"}
However, it works fine when IAM DB authorization is disabled.
I have created a policy (using https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-policy.html) to connect to the Neptune DB and attached the policy to the IAM role that is added to the EC2 instance. I am able to telnet to the Neptune DB endpoint on port 8182.
Can someone please help?
When IAM authentication is enabled, requests to the HTTP endpoint must be signed using SigV4. You can use a tool like awscurl to do this.
Here is an example from the Amazon Neptune documentation that I have modified slightly to have it point to the /status endpoint.
Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables correctly (and also AWS_SECURITY_TOKEN if you are using temporary credentials). You can also pass these as parameters to awscurl. Then use a command such as the following (change the region to your region).
awscurl -X GET --service neptune-db --region us-west-2 "$SYSTEM_ENDPOINT/status"
You can get temporary credentials using sts via the AWS CLI tools as follows:
aws sts get-session-token
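As a rough sketch of wiring those credentials into awscurl (assuming jq is installed; the environment variable names are the ones mentioned above):
# Fetch temporary credentials and export them for awscurl
creds=$(aws sts get-session-token --output json)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
export AWS_SECURITY_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')
# $SYSTEM_ENDPOINT is your Neptune cluster endpoint, e.g. https://<cluster>:8182
awscurl -X GET --service neptune-db --region us-west-2 "$SYSTEM_ENDPOINT/status"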
If you are running on an EC2 instance you can get the tokens from the metadata service so long as the EC2 instance has a role attached that has access to Neptune. More details here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
I have k8s clusters on AWS working with ECR and pulling images from all regions. This works fine.
But when I try to pull images from a different account, they get "no such host". I followed these instructions to set IAM permissions (and the docs). I'm not getting permission denied; I'm getting this:
Failed to pull image "<acc id>.dkr.ecr.ap-outheast-2.amazonaws.com/image:tag":
rpc error: code = Unknown desc = Error response from daemon:
Get https://<acc id>.dkr.ecr.ap-outheast-2.amazonaws.com/v1/_ping:
dial tcp: lookup <acc id>.dkr.ecr.ap-outheast-2.amazonaws.com
on 10.71.0.2:53: no such host
My cluster is running in ap-southeast-1, and the IP 10.71.0.2:53 is the default DNS AWS set up for the VPC.
I'm trying to work around this by populating this region's ECR as well, but it seems pretty wrong.
Any idea how to allow ECR to pull from another region?
I think you made a simple typo in .dkr.ecr.ap-outheast-2.amazonaws.com/image:tag - that's why you receive "no such host" from the DNS server. Just replace ap-outheast-2 with ap-southeast-2.
Generally, if you set the ECR IAM permissions right, this should work, as ECR is accessible/routable as a public service on the Internet, with limitations based on IAM.
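For example, wherever the image reference appears in your manifest it should use the corrected region; a minimal sketch (the pod and container names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder
spec:
  containers:
    - name: app            # placeholder
      # corrected region: ap-southeast-2, not ap-outheast-2
      image: <acc id>.dkr.ecr.ap-southeast-2.amazonaws.com/image:tag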
I have a Spring Boot project that runs normally on localhost. But when I upload the WAR to AWS using Elastic Beanstalk, I get a 404 Not Found.
The access to DynamoDB works fine from the CLI.
The variables in the properties file are the same as the ones I am using to access DynamoDB from the CLI.
properties file:
amazon.dynamodb.endpoint=http://dynamodb.us-west-2.amazonaws.com
amazon.aws.accesskey=***
amazon.aws.secretkey=***
CLI:
aws configure
AWS Access Key ID [********************]:
AWS Secret Access Key [********************]:
Default region name [us-west-2]:
Default output format [json]:
I don't know why I am getting the 404 Not Found on AWS.
Elastic Beanstalk expects the application to listen on port 5000 by default, so you just need to set server.port=5000 in your application.properties file before creating your WAR. Alternatively, you can set a SERVER_PORT environment property in the Elastic Beanstalk configuration; Spring reads it and listens on that port instead of its default 8080.
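A minimal sketch of the first option, reusing the properties already shown in the question and only adding server.port:
# application.properties
server.port=5000
amazon.dynamodb.endpoint=http://dynamodb.us-west-2.amazonaws.com
amazon.aws.accesskey=***
amazon.aws.secretkey=***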
Another issue I ran into was creating the environment using the Java platform, when you should actually be using Tomcat.
I used Ansible to create a gce cluster following the guideline at: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
At the end of the GCE creation, I used the add_host Ansible module to register all instances in their corresponding groups, e.g. gce_master_ip.
But when I try to run the following tasks after the creation task, they do not work:
- name: Create redis on the master
  hosts: gce_master_ip
  connection: ssh
  become: True
  gather_facts: True
  vars_files:
    - gcp_vars/secrets/auth.yml
    - gcp_vars/machines.yml
  roles:
    - { role: redis, tags: ["redis"] }
Within the auth.yml file I already provided the service account email, the path to the JSON credential file, and the project ID. But apparently that's not enough. I got errors like the one below:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
This is a typical case of the SSH username and credentials not being permitted or not provided. I would say I did not set up the username or private key for the SSH connection that Ansible uses to connect.
Is there anything I should do to make sure the corresponding credentials are provided to establish the connection?
During my search I found one question that briefly mentioned you could use the gcloud compute ssh ... command. But is there a way to tell Ansible to use the gcloud command instead of the classic ssh?
To have Ansible SSH into a GCE instance, you'll have to supply an SSH username and private key which correspond to the SSH configuration available on the instance.
So the question is: If you've just used the gcp_compute_instance Ansible module to create a fresh GCE instance, is there a convenient way to configure SSH on the instance without having to manually connect to the instance and do it yourself?
For this purpose, GCP provides a couple of ways to automate and manage key distribution for GCE instances.
For example, you could use the OS Login feature. To use OS Login with Ansible:
When creating the instance using Ansible, enable OS Login on the target instance by setting the "enable-oslogin" metadata field to "TRUE" via the metadata parameter.
Make sure the Service Account attached to the instance that runs Ansible has both the roles/iam.serviceAccountUser and roles/compute.osAdminLogin permissions.
Either generate a new SSH keypair or choose an existing one that will be deployed to the target instance.
Upload the public key for use with OS Login. This can be done via gcloud compute os-login ssh-keys add --key-file [KEY_FILE_PATH] --ttl [EXPIRE_TIME] (where --ttl specifies how long you want this public key to be usable; for example, --ttl 1d will make it expire after 1 day).
Configure Ansible to use the Service Account's user name and the private key which corresponds to the public key uploaded via the gcloud command, for example by overriding the ansible_user and ansible_ssh_private_key_file inventory parameters, or by passing --private-key and --user parameters to ansible-playbook (see the sketch below).
The service account username is the username value returned by the gcloud command above.
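As a rough sketch of that last step (the playbook name, key path and username below are placeholders, not from the original post):
# Username as returned by the gcloud os-login command for the service account
ansible-playbook create_redis.yml \
  --user sa_106895298040103627723 \
  --private-key ~/.ssh/gce_oslogin_key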
Also, if you want to automatically set the enable-oslogin metadata field to "TRUE" across all instances in your GCP project, you can simply add a project-wide metadata entry. This can be done in the Cloud Console under "Compute Engine > Metadata".
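It can also be done from the command line, for example with a project-wide metadata update like this sketch:
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE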