I have been wondering how to switch the DoctrineModule cache on and off in my development environment.
Currently my queries are cached and stored in the data/DoctrineModule/cache folder, and when bug testing this can cause constipation :)
Essentially I would like caching enabled in my production environment and switched off in my development environment. To do this I obviously need the correct config settings, along with a local/global config split to handle the difference.
The following is my test configuration file:
<?php
return [
    'doctrine' => [
        'connection' => [
            'orm_default' => [
                'driverClass' => 'Doctrine\DBAL\Driver\PDOMySql\Driver',
                'params' => [
                    'host'     => 'localhost',
                    'port'     => '3306',
                    'user'     => 'pw1',
                    'password' => 'pw1',
                    'dbname'   => 'server',
                    'unix_socket' => '/Applications/MAMP/tmp/mysql/mysql.sock' // To use Doctrine Entity Generator
                ]
            ]
        ],
        'eventmanager' => [
            'orm_default' => [
                'subscribers' => [
                    'Gedmo\Timestampable\TimestampableListener',
                ],
            ],
        ],
        'configuration' => [
            'orm_default' => [
                'naming_strategy' => 'UnderscoreNamingStrategy',
            ],
        ],
        'authentication' => [
            'orm_default' => [
                'object_manager'      => 'Doctrine\ORM\EntityManager',
                'identity_class'      => 'RoleBasedUser\Entity\User',
                'identity_property'   => 'email',
                'credential_property' => 'password',
                'credential_callable' => function (\RoleBasedUser\Entity\User $user, $passwordGiven) {
                    $hashedPassword  = $user->getPassword();
                    $passwordService = new \RoleBasedUser\Service\PasswordService();
                    return $passwordService->verify($passwordGiven, $hashedPassword);
                },
            ],
        ],
    ]
];
I am certain I need to add a config setting for this, but I am unsure which one. Thanks!
Okay, this has been worked out. In my module.config.php file I had set the Doctrine driver's cache to filesystem; I changed it to array and the problem is now resolved.
'doctrine' => [
    'driver' => [
        'RBU_driver' => [
            'class' => 'Doctrine\ORM\Mapping\Driver\AnnotationDriver',
            //'cache' => 'filesystem', <-- this will cache into data/DoctrineModule
            'cache' => 'array', // <-- this will not cache to disk
            'paths' => [
                __DIR__ . '/../src/RoleBasedUser/Entity'
            ]
        ],
        'orm_default' => [
            'drivers' => [
                'RoleBasedUser\Entity' => 'RBU_driver'
            ]
        ]
    ],
],
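To get the production/development split asked about above, a common ZF2 approach is to leave 'cache' => 'filesystem' in module.config.php and override it per environment through the autoload configs, since the skeleton merges config/autoload/{,*.}{global,local}.php over the module configuration and local.php is normally kept out of version control. A minimal sketch, assuming that standard skeleton layout and the RBU_driver key from above:

<?php
// config/autoload/local.php -- environment-specific and typically not committed,
// so each machine can carry its own value (assumption: standard ZF2 skeleton
// where autoload local configs are merged over module.config.php).
return [
    'doctrine' => [
        'driver' => [
            'RBU_driver' => [
                'cache' => 'array', // development: keep metadata in memory only
            ],
        ],
    ],
];

On the production box, omit this override (or set it back to 'filesystem') so the metadata keeps being cached under data/DoctrineModule/cache.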
I want to send a message to a queue using the AWS SDK. The instances of my worker environment should accept and execute the message. The problem is that I don't know how to pass the command to run a particular PHP script.
$params = [
    'MessageAttributes' => [
        "name" => [
            "DataType" => "String",
            "StringValue" => "test"
        ],
        "url" => [
            "DataType" => "String",
            "StringValue" => "/jobs/send_test_email.php"
        ],
    ],
    'MessageBody' => "Test",
    'QueueUrl' => $queueUrl
];
I think I have to pass the command via the MessageAttributes, but I can't find any examples beyond "Hello World".
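One way to wire this up, assuming an Elastic Beanstalk worker tier: the sqsd daemon on each worker instance POSTs every queue message to the environment's configured HTTP path, and it exposes message attributes as X-Aws-Sqsd-Attr-<name> request headers. So you can keep the script path in the url attribute exactly as above and let a single dispatcher endpoint route it. A sketch, where the dispatcher file, its whitelist, and the region are assumptions:

<?php
// Sender side: attach the script path as a message attribute
// ($queueUrl and the 'url' attribute are from the question; region is an assumption).
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs = new SqsClient(['region' => 'eu-west-1', 'version' => '2012-11-05']);

$sqs->sendMessage([
    'QueueUrl'          => $queueUrl,
    'MessageBody'       => 'Test',
    'MessageAttributes' => [
        'url' => ['DataType' => 'String', 'StringValue' => '/jobs/send_test_email.php'],
    ],
]);

<?php
// Worker side: the single endpoint sqsd POSTs to (hypothetical dispatcher.php).
// PHP surfaces the X-Aws-Sqsd-Attr-url header as $_SERVER['HTTP_X_AWS_SQSD_ATTR_URL'].
$script  = $_SERVER['HTTP_X_AWS_SQSD_ATTR_URL'] ?? null;
$allowed = ['/jobs/send_test_email.php'];   // whitelist so a message can't run arbitrary files

if ($script !== null && in_array($script, $allowed, true)) {
    require __DIR__ . $script;              // run the requested job
    http_response_code(200);                // a 2xx tells sqsd the message succeeded
} else {
    http_response_code(400);                // anything else leaves it for retry / dead-letter
}

Returning a 2xx status is what marks the message as processed; any other status leaves it on the queue to be retried according to the queue's policy.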
MySQL and Elasticsearch are hosted on AWS, and Logstash is running on an EC2 instance. I am not using a VPC. I can connect to MySQL locally or from my EC2 instance.
Modifying the question a bit. Here is the new Logstash config file on my EC2 instance. My EC2 instance does not have an SSL certificate; is that a problem?
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
        jdbc_user => "admin"
        jdbc_password => "pword"
        schedule => "* * * * *"
        jdbc_validate_connection => true
        jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * from foo"
        type => "foo"
        tags => ["foo"]
    }
    jdbc {
        jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
        jdbc_user => "admin"
        jdbc_password => "pword"
        schedule => "* * * * *"
        jdbc_validate_connection => true
        jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * from cat"
        type => "cat"
        tags => ["cat"]
    }
    jdbc {
        jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
        jdbc_user => "admin"
        jdbc_password => "pword"
        schedule => "* * * * *"
        jdbc_validate_connection => true
        jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * from rest"
        type => "rest"
        tags => ["rest"]
    }
}
output {
    stdout { codec => json_lines }
    if "foo" in [tags] {
        amazon_es {
            hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
            index => "foo"
            region => "us-east-1"
            aws_access_key_id => "id"
            aws_secret_access_key => "key"
            document_type => "foo-%{+YYYY.MM.dd}"
        }
    }
    if "cat" in [tags] {
        amazon_es {
            hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
            index => "cat"
            region => "us-east-1"
            aws_access_key_id => "id"
            aws_secret_access_key => "key"
            document_type => "cat-%{+YYYY.MM.dd}"
        }
    }
    if "rest" in [tags] {
        amazon_es {
            hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
            index => "rest"
            region => "us-east-1"
            aws_access_key_id => "id"
            aws_secret_access_key => "key"
            document_type => "rest-%{+YYYY.MM.dd}"
        }
    }
}
Now the issue I'm getting is a 403 Forbidden error.
I did create a user in AWS with the AmazonESFullAccess (AWS managed) policy attached. I'm not sure what else to do anymore. I am not using a VPC; I was trying to avoid that, so I want to stick with public access.
I tried creating a new Elasticsearch Service domain based on this guide: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg.html
but I get the error error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://user:xxxxxx@mysql-abcdefghijkl.us-east-1.es.amazonaws.com:9200/]
and the Logstash output for this instance is:
elasticsearch {
    hosts => ["https://es-mysql.us-east-1.es.amazonaws.com/"]
    index => "category"
    user => "user"
    password => "password"
    document_type => "cat-%{+YYYY.MM.dd}"
}
Obviously this isn't the preferred method, but I'm really just trying to set up a dev/personal environment.
Also, I am able to log in to Kibana on this instance.
ACCESS POLICY
For the first Elasticsearch Service:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111:user/root"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:11111111:domain/stuffed-es-mysql/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:166216189490:domain/stuffed-es-mysql/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "11.11.11.111",
                        "111.111.1.111",
                        "111.111.1.1",
                        "1.11.111.111"
                    ]
                }
            }
        }
    ]
}
For the second Elasticsearch Service I created:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:xxxxxxxxx:domain/es-mysql/*"
        }
    ]
}
So I resolved the two issues.
Problem one: I was still having a problem with my Logstash input connection to RDS (which wasn't the problem posted, but I thought I'd share anyway):
Problem:
The RDS MySQL instance was the wrong version. It was set to the value recommended by AWS (5.7), and I needed the latest 8.0.*. This kept JDBC from working correctly.
Solution:
Upgrade the RDS MySQL instance to 8.0.17. Logstash was then able to read the input from MySQL RDS.
Problem two: the output of my Logstash was not working.
Problem:
Receiving a 403 Forbidden error.
Solution:
Removed the HTTPS requirement when setting up the ES service. This most likely worked because my EC2 instance does not have an SSL certificate.
I had questioned this solution earlier in my post, but being new to setting up Logstash on EC2 and to the AWS Elasticsearch Service, I wasn't going with my instincts. Removing the HTTPS requirement is not the greatest idea, but this is a dev environment, and it fixed the issue. Why? Because my EC2 instance has no SSL certificate, and a common cause of a 403 error is a source sending requests to an SSL-only destination.
I think this is a common problem for people who want to run something on a budget, and it's overlooked. Most people want to jump to your access policy, IP policy, or security group, and that's understandable.
I am trying to get Chutzpah and Jasmine working together in Visual Studio; my end goal is to get unit tests running with TeamCity integration.
On save, all of the TypeScript code compiles into a single .js file. This also causes Chutzpah to run my tests; so far, so good.
My issue is that Chutzpah reports 0 passed, 0 failed and 0 errors. The generated Jasmine HTML file lists all of my tests correctly, but Chutzpah doesn't seem to receive any information back from Jasmine.
Highlights of a trace log:
Trying to build test context for c:\.....\test.ts
Building test context for c:\.....\test.ts
...framework dependencies / other ok looking things... (~15 lines)
Finished building test context for c:\.....\test.ts
Warning: 0 : Message:Chutzpah determined generated .js files are missing but the compile
mode is External so Chutzpah can't compile them. Test results may be wrong.
Then it starts PhantomJS and logs loading/receiving resources. My test.ts file is not one of the resources listed, but the site-wide .js file is (I checked it, and my tests are being appended to it).
Finished test run for c:\......\test.ts in Discovery mode
Cleaning up test context for c:\......\test.ts
Chutzpah run finished with 0 passed, 0 failed and 0 errors
Chutzpah.json file cache cleared
End Test Adapter Discover Tests
chutzpah.json
{
    "Framework": "jasmine",
    "EnableTestFileBatching": true,
    "Compile": {
        "Mode": "External",
        "Extensions": [ ".ts" ],
        "ExtensionsWithNoOutput": [ ".d.ts" ],
        "Paths": [
            {
                "OutputPath": "../SiteWide.js",
                "SourcePath": "Views"
            }
        ]
    },
    "References": [
        {
            "Path": "../knockout-3.4.2.js",
            "IsTestFrameworkFile": true
        }
    ],
    "Tests": [
        {
            "Includes": [ "*.ts" ],
            "Path": "../Tests/Views"
        }
    ],
    "EnableTracing": true,
    "TraceFilePath": "./trace.log"
}
tests.ts
describe('configuring unit tests for typescript!', () => {
    it('this should pass', () => {
        expect(1).toBe(1);
    });
    it('this should fail', () => {
        expect(1).toBe(0);
    });
});
There are a few things I'm suspicious of: the "missing .js files" warning from the trace (but that might just be caused by my single-file compilation step?), and maybe I'm missing references to Jasmine in my chutzpah.json?
I'm at a loss as to why the Jasmine tests work but Chutzpah doesn't report back.
Maybe this is late, but something like the following in chutzpah.json should help.
{
    "Framework": "jasmine",
    "Compile": {
        "Mode": "External",
        "Extensions": [ "*.ts" ],
        "ExtensionsWithNoOutput": [ "*.d.ts" ]
    },
    "References": [
        { "Path": "node_modules/promise-polyfill/dist", "Include": "*.js", "Exclude": "*.d.ts" },
        { "Path": "node_modules/systemjs/dist", "Include": "*.js", "Exclude": "*.d.ts" }
    ],
    "Tests": [
        { "Path": "unittests", "Includes": [ "*.spec.ts" ], "Excludes": [ "*.d.ts" ], "ExpandReferenceComments": "true" }
    ]
}
Having your system-related files in the References section is important. You can also try "*.spec.js" in the Tests section.
I'm trying to use a grok filter in Logstash 1.5.0 to parse several fields of data from a log file.
I'm able to parse a simple WORD field with no issues, but when I define a custom pattern and add that in as well, the grok parse fails.
I've tried a couple of grok debuggers which have been recommended elsewhere to find the issue:
http://grokconstructor.appspot.com/do/match
and
http://grokdebug.herokuapp.com/
Both say that my regex should be fine and return the fields that I want, but when I add it to my logstash.conf, grok fails to parse the log line and simply passes the raw data through to Elasticsearch.
My sample line is as follows:
APPERR [2015/06/10 11:28:56.602] C1P1405 S39 (VPTestSlave002_001)| 8000B Connect to CGDialler DB (VPTest - START)| {39/A612-89A0-A598/60B9-1917-B094/9E98F46E} Failed to get DB connection: SQLConnect failed. 08001 (17) [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied.
My logstash.conf grok config looks like this:
grok {
    patterns_dir => ["D:\rt\Logstash-1.5.0\bin\patterns"]
    match => { "message" => "%{WORD:LogLevel} \[%{KERNELTIMESTAMP:TimeStamp}\]" }
}
and the contents of my custom pattern file are:
KERNELTIMESTAMP %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
I am expecting this to return the following set of data:
{
    "LogLevel": [
        [
            "APPERR"
        ]
    ],
    "TimeStamp": [
        [
            "2015/06/10 11:28:56.602"
        ]
    ],
    "YEAR": [
        [
            "2015"
        ]
    ],
    "MONTHNUM": [
        [
            "06"
        ]
    ],
    "MONTHDAY": [
        [
            "10"
        ]
    ],
    "HOUR": [
        [
            "11",
            null
        ]
    ],
    "MINUTE": [
        [
            "28",
            null
        ]
    ],
    "SECOND": [
        [
            "56.602"
        ]
    ],
    "ISO8601_TIMEZONE": [
        [
            null
        ]
    ]
}
Can anyone tell me where my issue is?
I am trying to use aws change-resource-record-sets to add an alias. The idea is to allow access to a CloudFront distribution via a URL on our domain (e.g. mydomainname.mycompany.co.uk rather than mydomainname.cloudfront.net, where mydomainname is something like d4dzc6m38sq0mk).
After working through various other JSON errors, which I solved, I am still getting a problem.
A client error (InvalidChangeBatch) occurred: RRSet with DNS name
mydomainname.cloudfront.net. is not permitted in zone mycompany.co.uk.
What have I got wrong?
JSON:
{
    "Comment": "Recordset for mydomainname",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "mydomainname",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "mydomainname.cloudfront.net.",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}
EDITED to clarify the HostedZoneID.
You need to pass the complete record name in the Name parameter, fully qualified within the hosted zone (the error occurs because the name you gave does not fall under mycompany.co.uk). For your example you need to pass this:
"Name" : "mydomainname.mycompany.co.uk."
If "The idea is to allow access to a Cloudfront distribution via URL on our domain..." then try a CNAME instead of an alias...
aws route53 change-resource-record-sets --hosted-zone-id Z3A********TC8 --change-batch file://~/tmp/awsroute53recordset.json
awsroute53recordset.json
{
    "Comment": "Allow access to a Cloudfront distribution via URL on our domain",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "cdn.mycompany.co.uk",
                "Type": "CNAME",
                "TTL": 3600,
                "ResourceRecords": [
                    {
                        "Value": "d4dzc6m38sq0mk.cloudfront.net"
                    }
                ]
            }
        }
    ]
}
You have to add the 'Change' => node.
'Comment' => 'Created Programmatically',
'Changes' => [
    'Change' => [
        'Action' => 'CREATE',
        'ResourceRecordSet' => [
            'Name' => $domainName . '.',
            'Type' => 'A',
            'AliasTarget' => [
                'HostedZoneId' => '*ZoneID*',
                'DNSName' => '*DNSName*',
                'EvaluateTargetHealth' => false
            ]
        ]
    ]
],
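For reference, here is a fuller sketch of the surrounding call, assuming AWS SDK for PHP v3, whose documented request shape lists each change directly under 'Changes' without the 'Change' wrapper used above by older SDK versions. The zone ID and names are placeholders taken from the question:

<?php
// A sketch only: Route53Client is from aws/aws-sdk-php v3. The HostedZoneId at
// the top level must be the mycompany.co.uk zone's ID, while Z2FDTNDATAQYW2 is
// the fixed alias hosted zone ID shared by all CloudFront targets.
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;

$client = new Route53Client([
    'region'  => 'us-east-1',
    'version' => '2013-04-01',
]);

$client->changeResourceRecordSets([
    'HostedZoneId' => 'Z3A********TC8',
    'ChangeBatch'  => [
        'Comment' => 'Alias to a CloudFront distribution',
        'Changes' => [
            [
                'Action'            => 'CREATE',
                'ResourceRecordSet' => [
                    'Name'        => 'mydomainname.mycompany.co.uk.',
                    'Type'        => 'A',
                    'AliasTarget' => [
                        'HostedZoneId'         => 'Z2FDTNDATAQYW2',
                        'DNSName'              => 'd4dzc6m38sq0mk.cloudfront.net.',
                        'EvaluateTargetHealth' => false,
                    ],
                ],
            ],
        ],
    ],
]);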