Trouble parsing XML in logstash - elmah

Our applications are unstable and generate thousands of ELMAH error log files. We are trying to see which areas of the applications are error-prone and unstable. We have decided to use Elasticsearch, Logstash, and Kibana to search through these logs and surface trends. I am trying to configure Logstash for this scenario, but I am getting the "Something is wrong with your configuration." error when running the "logstash agent -f logstash-simple2.conf" command. What could I be doing wrong? Any pointers are appreciated.
logstash-simple2.conf:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "file"
    pattern => ["Z:/PROD/availability2/2014-04-15/00/**/*.xml"]
  }
}
output {
  stdout { }
  elasticsearch { embedded => true }
}
Input file:
Actual Error:
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.0/plugin-milestones {:level=>:warn}
Unknown setting 'pattern' for file {:level=>:error}
Error: Something is wrong with your configuration. You may be interested in the '--configtest' flag which you can use to validate logstash's configuration before you choose to restart a running system.

Your file input configuration is wrong: the file input has no 'pattern' setting, which is exactly what the error message says. Use 'path' instead:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "file"
    path => "Z:/PROD/availability2/2014-04-15/00/**/*.xml"
  }
}
output {
  stdout { }
  elasticsearch { embedded => true }
}
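Since the end goal is parsing the ELMAH XML itself, a possible next step (a sketch, not tested against this setup) is Logstash's xml filter, assuming the raw XML ends up in the message field of each event:

```
filter {
  xml {
    source => "message"
    target => "parsedXml"
  }
}
```

Note that the file input reads line by line, so multi-line XML documents may additionally need multiline handling to be reassembled into a single event before the xml filter runs.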

Related

Retrieving the progress of getObject (aws-sdk)

I'm using node.js with the aws-sdk (for S3).... When I am downloading a huge file from s3, how can I regularly retrieve the progress of the download so that the front-end can show a progress bar? Currently I am using getObject. (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property)
The code to download the file works. Here's a snippet of my code...
return await new Promise((resolve, reject) => {
  this.s3.getObject(params, (error, data) => {
    if (error) {
      reject(error);
    } else {
      resolve(data.Body);
    }
  });
});
I'm just not sure how to hook into the progress as it's downloading. Thanks in advance for any insight!
You can utilize S3 byte-range fetching, which allows fetching small parts of a file in S3. This capability lets us fetch a large object by dividing the download into multiple parts, which brings the following advantages:
A failed part download does not require re-downloading the whole file.
Download pause/resume capability.
Download progress tracking.
Retrying parts that failed or were interrupted by network issues.
Sniffing headers located in the first few bytes of the file if we just need metadata.
You can split the file download by a part size of your choice (I propose 1-4 MB at a time) and download the parts chunk by chunk; as each getObject promise completes, you can track how many have finished. A good start is looking at the AWS documentation.
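To make the part-splitting concrete, here is a minimal sketch (the helper name byteRanges is made up for illustration, not part of the AWS SDK) that computes the Range header value for each part; each string could then be passed as the Range parameter of a separate getObject call:

```javascript
// Split an object of totalSize bytes into HTTP Range header values of at
// most partSize bytes each (hypothetical helper, not an AWS SDK API).
function byteRanges(totalSize, partSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += partSize) {
    // HTTP byte ranges are inclusive on both ends.
    const end = Math.min(start + partSize, totalSize) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

// A 10-byte object in 4-byte parts:
console.log(byteRanges(10, 4)); // [ 'bytes=0-3', 'bytes=4-7', 'bytes=8-9' ]
```

The number of completed parts divided by ranges.length then gives you the progress fraction for the UI.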
STREAMING OPTION
Another option is to use a stream and track the amount of bytes received:
const { ContentLength: contentLength } = await s3.headObject(params).promise();
const rs = s3.getObject(s3Params).createReadStream();
let progress = 0;
rs.on('data', function (chunk) {
  // Advance progress by the number of bytes received in this chunk
  progress += chunk.length;
  console.log(`Progress: ${(progress / contentLength) * 100}%`);
});
// ... pipe to write stream

Jenkins Pipeline if-else script not giving expect result

I've been coming here for years and usually find the answer I seek, but this time I have a rather specific question.
I am building a pipeline that runs through a set of steps in one pipeline with a three-tier path to prod, using a choice parameter, a couple of string parameters, and withCredentials. These work fine up until my prod deploy, where an if/else test is failing.
I have a (secret text) Jenkins credential holding a basic password, which I am attempting to compare to the string entered on build start. I have checked the spelling, and in a basic standalone usage it works as expected, BUT when I add it to my full pipeline it fails.
I am thinking it is on account of my not using the correct syntax with steps, script, node, or ordering...? This is a new space for me, and I'm hoping that someone who's spent more time in this code space will see my error. Thanks in advance!
Fails:
...
stage('Deploy_PROD') {
  when {
    expression { params.DEPLOY_TO == 'Deploy_PROD' }
  }
  steps {
    withCredentials([string(credentialsId: '${creds}', variable: 'SECRET')]) {
      script {
        if ('${password}' == '$SECRET') {
          sh 'echo yes'
        } else {
          sh 'echo no'
        }
      }
    }
  }
}
Works:
stage('example')
node {
  withCredentials([string(credentialsId: '${creds}', variable: 'SECRET')]) {
    if ('${password}' == '$SECRET') {
      sh 'echo "test"'
    } else {
      sh 'echo ${password}'
    }
  }
}
The solution would be
if (password == SECRET) {
In Groovy, single-quoted strings are not interpolated, so '${password}' and '$SECRET' compare the literal text ${password} and $SECRET rather than the variable values. Also a recommended read: What's the difference of strings within single or double quotes in groovy?
I ended up using the withCredentials option with our AD server, which allowed finer control over users' access to deploy to controlled environments. Thanks for the assist.

Nuclio - "/bin" is not a valid file

When I try to run Nuclio, I receive the following error:
nuclio\plugin\fileSystem\reader\FileReaderException "/bin" is not a valid file.
This is a new installation with a custom application. I moved the application into a folder named "private".
What should I do to fix the problem?
This is likely due to the application not receiving a correct config path.
In your init.hh file, add an 'args' key and provide the application constructor parameters as shown in the example below:
<?hh //partial
return HH\Map {
  'sampleApp\\SampleApp' => HH\Map {
    'autoInit' => true,
    'args' => HH\Vector {
      '/',                          // URI binding
      __DIR__.'/sampleApp/config'   // Config dir
    }
  }
};
Without this, the Application plugin will try to search for the config but eventually give up, producing the error you see.
We'll make the error more obvious in a future release.

Parse on AWS Issues

I have recently migrated my Parse.com service over to AWS Elastic Beanstalk running the Parse Server project from Github. Everything seems to be working fine except when I try to perform a query in Cloud Code.
Whenever I try to run a Parse.Query command I get the following exception at runtime.
Uncaught internal server error. [ReferenceError: atom is not defined] ReferenceError: atom is not defined
at /usr/local/lib/node_modules/parse-server/lib/Adapters/Storage/Mongo/MongoTransform.js:559:78
at Array.map (native)
at transformConstraint (/usr/local/lib/node_modules/parse-server/lib/Adapters/Storage/Mongo/MongoTransform.js:556:29)
at transformQueryKeyValue (/usr/local/lib/node_modules/parse-server/lib/Adapters/Storage/Mongo/MongoTransform.js:193:7)
at transformWhere (/usr/local/lib/node_modules/parse-server/lib/Adapters/Storage/Mongo/MongoTransform.js:215:15)
at MongoStorageAdapter.find (/usr/local/lib/node_modules/parse-server/lib/Adapters/Storage/Mongo/MongoStorageAdapter.js:321:59)
at /usr/local/lib/node_modules/parse-server/lib/Controllers/DatabaseController.js:827:33
at run (/usr/local/lib/node_modules/parse-server/node_modules/babel-polyfill/node_modules/core-js/modules/es6.promise.js:89:22)
at /usr/local/lib/node_modules/parse-server/node_modules/babel-polyfill/node_modules/core-js/modules/es6.promise.js:102:28
at flush (/usr/local/lib/node_modules/parse-server/node_modules/babel-polyfill/node_modules/core-js/modules/_microtask.js:18:9)
Here is a sample of the Cloud Code I'm running. I must mention this code worked perfectly when hosted on Parse.com.
Parse.Cloud.define("getNumberOfUnreadMessages", function(request, response) {
  var currentUser = request.params.user;
  console.log("[getNumberOfUnreadMessages] Get User: " + JSON.stringify(currentUser));
  var query = new Parse.Query("messages");
  query.containedIn("toUser", [currentUser]);
  query.equalTo("read", false);
  query.find({
    success: function(results) {
      console.log('[getNumberOfUnreadMessages] Results: ' + results.length);
      response.success(results.length);
    },
    error: function(e) {
      response.error("[getNumberOfUnreadMessages] Error: " + JSON.stringify(e));
    }
  });
});
Any ideas what the problem could be?
Thanks!
So it turns out the issue has nothing to do with the server configuration. It was simply that I was trying to perform a Parse.Query.or with a full object as opposed to a pointer to an object. It's annoying that Parse didn't give me a proper error, but in this case there is no bug.
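For anyone hitting the same "atom is not defined" error: the difference is in what gets sent as the query constraint. A minimal sketch of the pointer shape (toPointer is a hypothetical helper; the object literal is Parse's REST pointer representation, and the class name and id are made up):

```javascript
// Build a Parse REST-style pointer (hypothetical helper, not a Parse SDK API).
// Passing a pointer like this in a query constraint, instead of a fully
// serialized object, is what avoids the failing Mongo transform.
function toPointer(className, objectId) {
  return { __type: 'Pointer', className: className, objectId: objectId };
}

console.log(toPointer('_User', 'abc123'));
// { __type: 'Pointer', className: '_User', objectId: 'abc123' }
```

In the JS SDK itself, the equivalent is usually obtained by creating an object with just its id (e.g. via Parse.Object.createWithoutData) rather than building this shape by hand.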

Gradle: How do I publish a zip from a non-java project and consume it in a java project?

I have a multi-project setup. I created a non-Java project whose artifact is a zip file that I will unzip in another project. The idea is as below:
my-non-java-project
build.gradle
------------
apply plugin: 'base'
task doZip(type:Zip) { ... }
task build(dependsOn: doZip) << {
}
artifacts {
  archives doZip
}
some-java-project
build.gradle
------------
apply plugin: 'war'
configurations {
  includeContent // This is a custom configuration
}
dependencies {
  includeContent project(':my-non-java-project')
}
task expandContent(type: Copy) {
  // Here is where I would like to get hold of all the files
  // belonging to the 'includeContent' configuration.
  // But this is always turning out to be empty. Not sure how I publish
  // non-Java content to the local repository (as understood by Gradle).
}
So, my question is, how do I publish the artifacts of a non-Java project to Gradle's internal repository such that I can pick them up in another Java-based project?
Not exactly sure what you're after, but here's a quick-and-dirty way to get access to the FileCollection of the :my-non-java-project:doZip task outputs:
project(":my-non-java-project").tasks.getByName("doZip").outputs.files
Note that the archives configuration is added by the Java plugin, not the Base plugin. But you can still define a custom configuration in my-non-java-project and add the artifact to it with the code in your OP:
//in my-non-java-project/build.gradle
configurations {
  archives
}
artifacts {
  archives doZip
}
Then you can access the task outputs via the configuration, like so (again, quick-and-dirty):
//in some-java-project/build.gradle
project(":my-non-java-project").configurations.archives.artifacts.files
Note that you can expand the content of your zip file using zipTree.
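For example (a sketch; the destination path is an assumption, and the configuration name comes from the snippets in the question), the consuming project could expand the zip pulled in via the custom configuration like this:

```groovy
// In some-java-project/build.gradle (sketch): unpack every zip resolved
// through the custom 'includeContent' configuration into the build dir.
task expandContent(type: Copy) {
    from configurations.includeContent.collect { zipTree(it) }
    into "$buildDir/expandedContent"
}
```

Using the configuration (rather than reaching into the other project's tasks) also lets Gradle wire up the task dependency on doZip for you.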
If you need to actually publish the zip produced by my-non-java-project, you can read about that here.