I am completely new to Perl and need to create a tool that consumes a SOAP web service and saves the XML it returns to an output file. At this point I can consume the web service and save the response as hash data, but I need it in XML format.
My code is pretty simple and goes like this:
#!/usr/bin/perl -w
use SOAP::Lite ( +trace => "all", maptype => {} );
use IO::File;
use Data::Dump "pp";
sub SOAP::Transport::HTTP::Client::get_basic_credentials {
return 'username' => 'password';
}
my $soap = SOAP::Lite
-> proxy('https://.../WebService.do?SOAP', ssl_opts => [ SSL_verify_mode => 0 ] );
my $method = SOAP::Data->name('execute') -> attr({xmlns => 'http://.../'});
my $output = IO::File->new(">output.xml");
my %keyHash = %{ $soap->call($method)->body};
print $output pp({%keyHash});
$output->close();
As I have full tracing on, I can see the XML the web service provides in the console while my program runs, but what gets printed to the output file is the hash as defined in Perl, with key => value pairs organized as if it were JSON:
{
Docs => {
AssetDefinition => "AccountNumber",
BatchId => 1,
Doc => [
{
AssetDefinitionId => "CNTR0016716",
DateForRetention => "",
FileName => "",
FilePath => "",
SequenceNumber => "",
},
],
},
}
The data is completely correct, but I need it saved in the file as XML, and at this point I think I am going in the wrong direction.
Any help will be greatly appreciated.
Thanks and regards,
Felipe
You are on the right track. The SOAP call just returns a Perl data structure, a hash of hashes. You need an additional step to convert it back to XML.
I would recommend the XML::Simple module: http://search.cpan.org/~grantm/XML-Simple-2.20/lib/XML/Simple.pm
use XML::Simple qw(:strict);
# in strict mode, XMLout() requires KeyAttr to be specified explicitly
my $xml = XMLout(\%keyHash, KeyAttr => {});
print $output $xml;
You can supply further options for more control over the XML formatting.
I am able to use the Athena API with startQueryExecution() to create a CSV file of the responses in S3. However, I would like to return a JSON response to my application so I can further process the data. How can I get the results back as a JSON response after running startQueryExecution() via the API?
I am using the AWS PHP SDK [https://aws.amazon.com/sdk-for-php/], but this is relevant to any language, since I cannot find any answers about actually getting a response back; it just saves a CSV file to S3.
$athena = AWS::createClient('athena');
$queryx = 'SELECT * FROM elb_logs LIMIT 20';
$result = $athena->startQueryExecution([
'QueryExecutionContext' => [
'Database' => 'sampledb',
],
'QueryString' => 'SELECT request_ip FROM elb_logs LIMIT 20', // REQUIRED
'ResultConfiguration' => [ // REQUIRED
'EncryptionConfiguration' => [
'EncryptionOption' => 'SSE_S3' // REQUIRED
],
'OutputLocation' => 's3://xxxxxx/', // REQUIRED
],
]);
// check completion : getQueryExecution()
$exId = $result['QueryExecutionId'];
sleep(6);
$checkExecution = $athena->getQueryExecution([
'QueryExecutionId' => $exId, // REQUIRED
]);
if($checkExecution["QueryExecution"]["Status"]["State"] == 'SUCCEEDED')
{
$dataOutput = $athena->getQueryResults([
'QueryExecutionId' => $result['QueryExecutionId'], // REQUIRED
]);
while (($data = fgetcsv($dataOutput, 1000, ",")) !== FALSE) {
$num = count($data);
echo "<p> $num fields in line $row: <br /></p>\n";
$row++;
for ($c=0; $c < $num; $c++) {
echo $data[$c] . "<br />\n";
}
}
}
The Amazon Athena SDK will return the results of a query, and you can then write (send) those results as JSON yourself; the SDK will not do this for you.
The startQueryExecution() API returns a QueryExecutionId. Use it with getQueryExecution() to determine whether the query is complete. Once the query completes, call getQueryResults().
You can then process each row in the result set.
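Since the asker notes this is relevant to any language, here is a rough sketch in Python of that remaining step: reshaping the row structure that getQueryResults() returns into JSON. The nesting follows the GetQueryResults API response (the first row carries the column names); the sample payload itself is made up.

```python
import json

def athena_results_to_json(results):
    """Convert a GetQueryResults-shaped response into a JSON array of records."""
    rows = results["ResultSet"]["Rows"]
    # The first row of an Athena result set holds the column names.
    header = [col.get("VarCharValue") for col in rows[0]["Data"]]
    records = []
    for row in rows[1:]:
        values = [col.get("VarCharValue") for col in row["Data"]]
        records.append(dict(zip(header, values)))
    return json.dumps(records)

# Hypothetical response for a query returning a single request_ip column:
sample = {
    "ResultSet": {
        "Rows": [
            {"Data": [{"VarCharValue": "request_ip"}]},
            {"Data": [{"VarCharValue": "10.0.0.1"}]},
            {"Data": [{"VarCharValue": "10.0.0.2"}]},
        ]
    }
}
print(athena_results_to_json(sample))
```

The same zip-header-with-values idea carries over directly to PHP once getQueryResults() has returned.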
I am using logstash to parse log entries from an input log file.
LogLine:
TID: [0] [] [2016-05-30 23:02:02,602] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 572ms {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService}
Grok Pattern:
TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}
My grok pattern is working fine. I am sending these parsed entries to a REST-based API that I wrote myself.
Configurations:
output {
stdout { }
http {
url => "http://localhost:8086/messages"
http_method => "post"
format => "json"
mapping => ["TimeStamp","%{TimeStamp}","CorrelationId","986565","Severity","NORMAL","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{Message}"]
}
}
In the current output, I am getting the date as it is parsed from the logs:
Current Output:
{
"TimeStamp": "2016-05-30 23:02:02,602"
}
Problem Statement:
But the problem is that my API does not expect the date in that format; it expects the generic XSD datetime format, as mentioned below:
Expected Output:
{
"TimeStamp": "2016-05-30T23:02:02:602"
}
Can somebody please tell me what changes I need to make in my filter or output mapping to achieve this?
In order to transform
2016-05-30 23:02:02,602
to the XSD datetime format
2016-05-30T23:02:02.602
you can simply add a mutate/gsub filter to replace the space character with a T and the comma with a period:
filter {
mutate {
gsub => [
"TimeStamp", "\s", "T",
"TimeStamp", ",", "."
]
}
}
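As a sanity check outside Logstash, the same two replacements can be sketched in Python (the function name is my own):

```python
import re

def to_xsd_datetime(timestamp):
    # Mirror the two gsub rules: whitespace -> "T", comma -> "."
    return re.sub(r",", ".", re.sub(r"\s", "T", timestamp))

print(to_xsd_datetime("2016-05-30 23:02:02,602"))  # 2016-05-30T23:02:02.602
```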
Here is the code I use to fetch all the files in an S3 bucket:
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$client = S3Client::factory(array(
'key' => 'key',
'secret' => 'secret',
'region' => 'eu-east-1'
));
$client->registerStreamWrapper();
$objects = $client->getListObjectsIterator(array(
'Bucket' => "bucket_name",
'Prefix' => 'folder_name/'
));
How can I get only the files whose names contain a given string?
Thanks.
That's impossible to do via the S3 API, because the S3 API lets you filter by string only if that string is a prefix of the keys you are listing.
Therefore, you have to list all the keys, store the result in a variable, and filter that list programmatically afterwards.
Of course, you can start from a shorter list by using the prefix filter when you retrieve the list in the first place.
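To illustrate that client-side filtering step, here is a minimal sketch (in Python rather than PHP, purely for brevity); the key names are invented:

```python
def keys_containing(keys, needle):
    """Return only the keys whose name contains the given string."""
    return [key for key in keys if needle in key]

# Keys as they might come back from a prefix-filtered listing:
listed = [
    "folder_name/report-2023.csv",
    "folder_name/image.png",
    "folder_name/report-2024.csv",
]
print(keys_containing(listed, "report"))
```

In PHP the equivalent is a loop (or array_filter) with strpos() over the iterated keys.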
I want to use Logstash to parse Python log files; where can I find resources that help me do that? For example:
20131113T052627.769: myapp.py: 240: INFO: User Niranjan Logged-in
From this I need to capture the time information and also the other data fields.
I had exactly the same problem/need. I couldn't really find a solution to this. No available grok patterns really matched the Python logging output, so I simply went ahead and wrote a custom grok pattern, which I've naively added to patterns/grok-patterns:
DATESTAMP_PYTHON %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND},%{INT}
The Logstash configuration I wrote gave me nice fields:
#timestamp
level
message
I also added an extra field, which I called pymodule, that should show you the Python module producing the log entry.
My Logstash configuration file looks like this (ignore the sincedb_path; it is simply a way of forcing Logstash to read the entire log file every time you run it):
input {
file {
path => "/tmp/logging_file"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
grok {
match => [
"message", "%{DATESTAMP_PYTHON:timestamp} - %{DATA:pymodule} - %{LOGLEVEL:level} - %{GREEDYDATA:logmessage}" ]
}
mutate {
rename => [ "logmessage", "message" ]
}
date {
timezone => "Europe/Luxembourg"
locale => "en"
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
}
}
output {
stdout {
codec => json
}
}
Please note that I give absolutely no guarantee that this is the best or even a slightly acceptable solution.
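For what it's worth, the asker's example line (20131113T052627.769: myapp.py: 240: INFO: ...) is in a different format from the one this pattern matches. The equivalent parse for that format can be sanity-checked with a plain regex; here is a Python sketch, with field names of my own choosing:

```python
import re

# Compact timestamp, file name, line number, level, and message,
# separated by ": " as in the asker's example line.
LOG_RE = re.compile(
    r"^(?P<timestamp>\d{8}T\d{6}\.\d+): "
    r"(?P<pymodule>\S+): (?P<lineno>\d+): "
    r"(?P<level>\w+): (?P<message>.*)$"
)

line = "20131113T052627.769: myapp.py: 240: INFO: User Niranjan Logged-in"
fields = LOG_RE.match(line).groupdict()
print(fields["timestamp"], fields["level"], fields["message"])
```

Each named group maps onto a grok field (grok patterns compile down to regexes), so the same structure can be expressed as a custom pattern.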
Our Python log file has a slightly different format:
[2014-10-08 19:05:02,846] (6715) DEBUG:Our debug message here
So I was able to create a configuration file without any need for special patterns:
input {
file {
path => "/path/to/python.log"
start_position => "beginning"
}
}
filter {
grok {
match => [
"message", "\[%{TIMESTAMP_ISO8601:timestamp}\] \(%{DATA:pyid}\) %{LOGLEVEL:level}\:%{GREEDYDATA:logmessage}" ]
}
mutate {
rename => [ "logmessage", "message" ]
}
date {
timezone => "Europe/London"
locale => "en"
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
}
}
output {
elasticsearch {
host => "localhost"
}
stdout {
codec => rubydebug
}
}
And this seems to work fine.
Is it possible to get just an object's custom metadata from S3 without having to get the whole object? I've looked through the AWS SDK for PHP 2 and searched Google and SO with no clear answer, or maybe just not the answer I'm hoping for.
Thanks.
Maybe this would help for PHP 2? It uses the Guzzle framework, which I'm not familiar with.
Executes a HeadObject command: The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
Final attempt using Guzzle framework (untested code):
use Guzzle\Service\Resource\Model;
use Aws\Common\Enum\Region;
use Aws\S3\S3Client;
$client = S3Client::factory(array(
"key" => "YOUR ACCESS KEY ID",
"secret" => "YOUR SECRET ACCESS KEY",
"region" => Region::US_EAST_1,
"scheme" => "http",
));
// HEAD object
$headers = $client->headObject(array(
"Bucket" => "your-bucket",
"Key" => "your-key"
));
print_r($headers->toArray());
PHP 1.6.2 Solution
// Instantiate the class
$s3 = new AmazonS3();
$bucket = 'my-bucket' . strtolower($s3->key);
$response = $s3->get_object_metadata($bucket, 'üpløåd/î\'vé nøw béén üpløådéd.txt');
// Success?
var_dump($response['ContentType']);
var_dump($response['Headers']['content-language']);
var_dump($response['Headers']['x-amz-meta-ice-ice-baby']);
Credit to: http://docs.aws.amazon.com/AWSSDKforPHP/latest/#m=AmazonS3/get_object_metadata
Hope that helps!
AWS HEAD Object http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
use Aws\S3\S3Client;
use Guzzle\Common\Collection;
$client = S3Client::factory(array(
'key' => 'YOUR-AWS-KEY',
'secret' => 'YOUR-SECRET-KEY'
));
// Use Guzzle's toArray() method.
$result = $client->headObject(['Bucket' => 'YOUR-BUCKET-NAME', 'Key' => 'YOUR-FILE-NAME'])->toArray();
print_r($result['Metadata']);