Let AWS SQS run a PHP script via AWS SDK

I want to send a message to a queue using the AWS SDK. The instances of my worker environment should receive the message and execute the corresponding job. The problem is that I don't know how to pass the command that tells a worker to run a particular PHP script.
$params = [
    'MessageAttributes' => [
        "name" => [
            "DataType" => "String",
            "StringValue" => "test"
        ],
        "url" => [
            "DataType" => "String",
            "StringValue" => "/jobs/send_test_email.php"
        ],
    ],
    'MessageBody' => "Test",
    'QueueUrl' => $queueUrl
];
I think I have to pass the command via a MessageAttribute, but I can't find any examples beyond "Hello World".
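A minimal sketch of both sides, assuming the AWS SDK for PHP v3 and an Elastic Beanstalk worker tier, whose sqsd daemon POSTs each queue message to the environment's configured HTTP endpoint and forwards message attributes as X-Aws-Sqsd-Attr-* headers; the endpoint script, region, and job whitelist below are placeholders, not confirmed details:
<?php
// Sender: push the job message onto the worker queue (SDK v3).
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$client = new SqsClient([
    'region'  => 'us-east-1', // placeholder region
    'version' => '2012-11-05',
]);

$client->sendMessage($params); // $params as built above

// Receiver (e.g. worker.php, the endpoint sqsd POSTs to): dispatch on the
// "url" attribute, which arrives as the X-Aws-Sqsd-Attr-url header.
$allowedJobs = ['/jobs/send_test_email.php']; // hypothetical whitelist
$job = $_SERVER['HTTP_X_AWS_SQSD_ATTR_URL'] ?? '';

if (in_array($job, $allowedJobs, true)) {
    require __DIR__ . $job;   // run the requested PHP script
    http_response_code(200);  // tell sqsd the job succeeded so it deletes the message
} else {
    http_response_code(400);  // unknown job; message will be retried or dead-lettered
}
Dispatching through a whitelist, rather than requiring whatever path arrives in the attribute, keeps the queue from becoming a remote-code-execution vector.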

AWS ElasticSearch Logstash 403 Forbidden Access

MySQL and Elasticsearch are hosted on AWS, and Logstash is running on an EC2 instance. I am not using a VPC. I can connect to MySQL locally and from my EC2 instance.
Modifying the question a bit: here is the new Logstash config file on my EC2 instance. My EC2 instance does not have an SSL certificate; is that a problem?
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from foo"
    type => "foo"
    tags => ["foo"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from cat"
    type => "cat"
    tags => ["cat"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from rest"
    type => "rest"
    tags => ["rest"]
  }
}
output {
  stdout { codec => json_lines }
  if "foo" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "foo"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "foo-%{+YYYY.MM.dd}"
    }
  }
  if "cat" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "cat"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "cat-%{+YYYY.MM.dd}"
    }
  }
  if "rest" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "rest"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "rest-%{+YYYY.MM.dd}"
    }
  }
}
Now the issue I'm getting is a 403 Forbidden error.
I did create an IAM user with the AmazonESFullAccess (AWS managed) policy. I'm not sure what else to do anymore. I am not using a VPC, as I was trying to avoid that, so I want to stick with public access.
I tried creating a new Elasticsearch Service domain based on this guide: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg.html
but I get the error error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://user:xxxxxx@mysql-abcdefghijkl.us-east-1.es.amazonaws.com:9200/]
and the Logstash output for this instance is:
elasticsearch {
  hosts => ["https://es-mysql.us-east-1.es.amazonaws.com/"]
  index => "category"
  user => "user"
  password => "password"
  document_type => "cat-%{+YYYY.MM.dd}"
}
Obviously this isn't the preferred method, but I'm really just trying to set up a dev/personal environment.
Also, I am able to log in to Kibana on this instance.
ACCESS POLICY
For the first Elasticsearch Service domain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111:user/root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:11111111:domain/stuffed-es-mysql/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:166216189490:domain/stuffed-es-mysql/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "11.11.11.111",
            "111.111.1.111",
            "111.111.1.1",
            "1.11.111.111"
          ]
        }
      }
    }
  ]
}
For the second Elasticsearch Service domain I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:xxxxxxxxx:domain/es-mysql/*"
    }
  ]
}
So, I resolved the two issues.
Problem one: I was still having a problem with my Logstash input connection to RDS (which wasn't the problem posted, but I thought I'd share anyway).
Problem:
The RDS MySQL instance was the wrong version. It was set to the value recommended by AWS (5.7) and I needed the latest 8.0.*. This was stopping JDBC from working correctly.
Solution:
Update the RDS MySQL instance to 8.0.17. Now Logstash was able to read its input from the MySQL RDS instance.
Problem two: The output of my Logstash was not working.
Problem:
Receiving a 403 Forbidden error.
Solution:
Removed the HTTPS requirement when setting up the ES domain. This most likely worked for me because my EC2 instance does not have an SSL certificate.
The solution to problem two was what I questioned in my post; being new to running Logstash on EC2 and to the Elasticsearch Service on AWS, I didn't go with my instincts. But I removed the HTTPS requirement when setting up the ES domain. Not the greatest idea, but this is a dev environment, and it fixed the issue. Why? Because my EC2 instance is not SSL certified, which makes sense: one common cause of a 403 error is a source without a certificate sending requests to an SSL-only destination.
I think this is a common problem for people who want to run something on a budget, and it's overlooked. Most people want to jump to your access policy, IP policy or security group, and that's understandable.

Pulumi: how to create a CloudWatch event rule for a repository

I am trying to capture the PutImage event from a specific ECR repository, using a CloudWatch event rule to trigger a Lambda.
My problem is with eventPattern being typed as 'string':
export const myTestRepo = ECRTemplate('my-test-repo');
export const eventRule = new aws.cloudwatch.EventRule("putimagerule", {
    eventPattern: JSON.stringify({
        "detail-type": [
            "AWS API Call via CloudTrail"
        ],
        "source": ["aws.ecr"],
        "detail": {
            "eventName": ["PutImage"],
            "repositoryName": [myTestRepo.repository.name]
        }
    }),
});
and the resulting event rule looks like this:
{
  "detail": {
    "eventName": [
      "PutImage"
    ],
    "repositoryName": [
      "Calling [toJSON] on an [Output\u003cT\u003e] is not supported.\n\nTo get the value of an Output as a JSON value or JSON string consider either:\n 1: o.apply(v =\u003e v.toJSON())\n 2: o.apply(v =\u003e JSON.stringify(v))\n\nSee https://pulumi.io/help/outputs for more details.\nThis function may throw in a future version of @pulumi/pulumi."
    ]
  },
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "source": [
    "aws.ecr"
  ]
}
The object myTestRepo contains a valid Repository and is not part of the problem, which is why it's not included here.
Q: How to catch PutImage for a specific repository?
The problem is caused by the type of myTestRepo.repository.name: it's not a string but a pulumi.Output<string>. Its value is unknown at the time the program first runs, so you can't embed it directly in a JSON.stringify call.
Instead, you can use the apply function:
const eventRule = new aws.cloudwatch.EventRule("putimagerule", {
    eventPattern: myTestRepo.repository.name.apply(repositoryName =>
        JSON.stringify({
            "detail-type": [
                "AWS API Call via CloudTrail",
            ],
            "source": ["aws.ecr"],
            "detail": {
                eventName: ["PutImage"],
                repositoryName: [repositoryName],
            },
        })),
});
You can learn more in the Outputs and Inputs docs.
The issue is with the line "repositoryName": [myTestRepo.repository.name]
Try
export const myTestRepo = ECRTemplate('my-test-repo');
export const eventRule = new aws.cloudwatch.EventRule("putimagerule", {
    eventPattern: myTestRepo.repository.name.apply(name =>
        JSON.stringify({
            "detail-type": [
                "AWS API Call via CloudTrail"
            ],
            "source": ["aws.ecr"],
            "detail": {
                "eventName": ["PutImage"],
                "repositoryName": [name]
            }
        })),
});

Unable to send data to AWS elastic search instance using logstash

I am trying to send data to an AWS Elasticsearch endpoint using Logstash installed on my local machine.
The Logstash conf file looks like this:
input {
  file {
    path => "/path/log.txt"
  }
}
output {
  amazon_es {
    hosts => ["https://search-abclostashtrial-5jdfc43oqql7qsrhfgbvwewku.us-east-2.es.amazonaws.com"]
    action => "index"
    region => "us-east-2"
    index => "trial"
    ssl => true
  }
}
The Elasticsearch access policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-2:0415721453395:domain/abclostashtrial/*"
    }
  ]
}
I am using the logstash-output-amazon_es plugin and run Logstash with
sudo bin/logstash -f /path/logstash/abc.conf
and I get the following error log:
[ERROR] 2019-04-30 20:05:52.900 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[INFO ] 2019-04-30 20:05:53.165 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-04-30 20:05:58.037 [LogStash::Runner] runner - Logstash shut down.
What am I missing here ?
One option to start with is to create an access key that has rights to write to Elasticsearch, and configure it in the output. Example:
amazon_es {
  hosts => ["vpc-xxxxxxxxx-es-yyyyyy4pywmwigwi47em.us-east-1.es.amazonaws.com"]
  region => "us-east-1"
  aws_access_key_id => 'AKIxxxxxxxxxxx'
  aws_secret_access_key => '11xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  index => "production-logindex-%{+YYYY.MM.dd}"
}

ZF2 & Doctrine ORM - How to turn off the DoctrineModule cache

I have been wondering how to switch the DoctrineModule cache on and off in my development environment.
Currently my queries are cached and stored in the data/DoctrineModule/cache folder, and when bug testing this can cause constipation :)
Essentially I would like caching enabled for my production environment and switched off for my development environment. To do this I obviously need the correct settings in the configs, and a local/global config split to deal with it.
The following is my test configuration file:
<?php
return [
    'doctrine' => [
        'connection' => [
            'orm_default' => [
                'driverClass' => 'Doctrine\DBAL\Driver\PDOMySql\Driver',
                'params' => [
                    'host' => 'localhost',
                    'port' => '3306',
                    'user' => 'pw1',
                    'password' => 'pw1',
                    'dbname' => 'server',
                    'unix_socket' => '/Applications/MAMP/tmp/mysql/mysql.sock' // To use the Doctrine entity generator
                ]
            ]
        ],
        'eventmanager' => [
            'orm_default' => [
                'subscribers' => [
                    'Gedmo\Timestampable\TimestampableListener',
                ],
            ],
        ],
        'configuration' => [
            'orm_default' => [
                'naming_strategy' => 'UnderscoreNamingStrategy',
            ],
        ],
        'authentication' => [
            'orm_default' => [
                'object_manager' => 'Doctrine\ORM\EntityManager',
                'identity_class' => 'RoleBasedUser\Entity\User',
                'identity_property' => 'email',
                'credential_property' => 'password',
                'credential_callable' => function (\RoleBasedUser\Entity\User $user, $passwordGiven) {
                    $hashedPassword = $user->getPassword();
                    $passwordService = new \RoleBasedUser\Service\PasswordService();
                    return $passwordService->verify($passwordGiven, $hashedPassword);
                },
            ],
        ],
    ]
];
I am certain I need to add a config setting; which one, though, I am unsure of.
Thanks!
Okay, this has been worked out.
In my module.config.php file I had set the Doctrine driver cache to 'filesystem'; I changed this to 'array' and the problem is now resolved.
'doctrine' => [
    'driver' => [
        'RBU_driver' => [
            'class' => 'Doctrine\ORM\Mapping\Driver\AnnotationDriver',
            //'cache' => 'filesystem', // <-- this will cache into data/DoctrineModule
            'cache' => 'array', // <-- this will not cache
            'paths' => [
                __DIR__ . '/../src/RoleBasedUser/Entity'
            ]
        ],
        'orm_default' => [
            'drivers' => [
                'RoleBasedUser\Entity' => 'RBU_driver'
            ]
        ]
    ],
],
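To get the production/development split asked about in the question, one approach (a sketch, assuming the usual ZF2 config/autoload merge order and a driver named RBU_driver as in the block above; the ORM cache keys are my assumption about DoctrineORMModule's configuration, not taken from the question) is to keep 'filesystem' in module.config.php for production and override it in a git-ignored local config on development machines:
<?php
// config/autoload/local.php: present only on development machines,
// merged over module.config.php by the ZF2 config loader.
return [
    'doctrine' => [
        'driver' => [
            'RBU_driver' => [
                'cache' => 'array', // per-request in-memory cache, i.e. nothing persists
            ],
        ],
        // Assumption: DoctrineORMModule also lets you point the ORM caches
        // at the 'array' backend per environment.
        'configuration' => [
            'orm_default' => [
                'metadata_cache' => 'array',
                'query_cache'    => 'array',
                'result_cache'   => 'array',
            ],
        ],
    ],
];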

Error on using aws change-resource-record-sets to add an alias

I am trying to use aws change-resource-record-sets to add an alias. The idea is to allow access to a CloudFront distribution via a URL on our domain (e.g. mydomainname.mycompany.co.uk rather than mydomainname.cloudfront.net, where mydomainname is something like d4dzc6m38sq0mk).
After working through various other JSON errors, which I solved, I am still getting a problem:
A client error (InvalidChangeBatch) occurred: RRSet with DNS name
mydomainname.cloudfront.net. is not permitted in zone mycompany.co.uk.
What have I got wrong?
JSON:
{
  "Comment": "Recordset for mydomainname",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "mydomainname",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "mydomainname.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EDITED to clarify the HostedZoneId.
You need to pass the complete, fully-qualified record name in the Name parameter, and it has to sit inside your hosted zone. For your example you need to pass this:
"Name" : "mydomainname.mycompany.co.uk."
If "The idea is to allow access to a Cloudfront distribution via URL on our domain..." then try a CNAME instead of an alias...
aws route53 change-resource-record-sets --hosted-zone-id Z3A********TC8 --change-batch file://~/tmp/awsroute53recordset.json
awsroute53recordset.json
{
  "Comment": "Allow access to a Cloudfront distribution via URL on our domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "cdn.mycompany.co.uk",
        "Type": "CNAME",
        "TTL": 3600,
        "ResourceRecords": [
          {
            "Value": "d4dzc6m38sq0mk.cloudfront.net"
          }
        ]
      }
    }
  ]
}
You have to add the 'Change' => node:
'Comment' => 'Created Programmatically',
'Changes' => [
    'Change' => [
        'Action' => 'CREATE',
        'ResourceRecordSet' => [
            'Name' => $domainName . '.',
            'Type' => 'A',
            'AliasTarget' => [
                'HostedZoneId' => '*ZoneID*',
                'DNSName' => '*DNSName*',
                'EvaluateTargetHealth' => false
            ]
        ]
    ]
],
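For reference, a sketch of the same change submitted through the AWS SDK for PHP v3, where 'Changes' under ChangeBatch is a plain list of Action/ResourceRecordSet maps. The hosted zone ID for mycompany.co.uk and the record names are placeholders; Z2FDTNDATAQYW2 is the fixed zone ID that CloudFront alias targets use, as in the question's JSON:
<?php
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;

$route53 = new Route53Client([
    'region'  => 'us-east-1',
    'version' => '2013-04-01',
]);

$route53->changeResourceRecordSets([
    'HostedZoneId' => 'ZXXXXXXXXXXXXX', // placeholder: the mycompany.co.uk zone
    'ChangeBatch'  => [
        'Comment' => 'Recordset for mydomainname',
        'Changes' => [
            [
                'Action' => 'CREATE',
                'ResourceRecordSet' => [
                    'Name' => 'mydomainname.mycompany.co.uk.', // must live inside the zone
                    'Type' => 'A',
                    'AliasTarget' => [
                        'HostedZoneId' => 'Z2FDTNDATAQYW2', // CloudFront alias target zone
                        'DNSName' => 'mydomainname.cloudfront.net.',
                        'EvaluateTargetHealth' => false,
                    ],
                ],
            ],
        ],
    ],
]);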