Right now I have an old legacy site, around 10-20 years old. The payment gateway API is provided by Authorize.net. As far as I can see, it's Perl scripts with requests to the payment API that look like this:
$trans{'x_version'}='3.1';
$trans{'x_login'}='';
$trans{'x_tran_key'}="";
$trans{'x_delim_data'}='TRUE';
$trans{'x_delim_char'}='|';
if($test_mode>0) { $trans{'x_test_request'}='TRUE'; }
....
$trans{'id'}=$row[0];
$trans{'created'}=$row[1];
$trans{'status'}=$row[2];
# Customer Data
$trans{'x_cust_id'}=$row[13];
$trans{'x_customer_ip'}=$row[14];
$trans{'x_email'}=$row[15]; # - to check AVS;
if(length($trans{'x_email'})) {
    $trans{'x_email'} =~ s/#/@/g; # encode email by authorize.net spec requirement
    $trans{'x_email_customer'}='TRUE';
} else {
    $trans{'x_email_customer'}='FALSE';
}
$trans{'x_first_name'}=$row[18];
$trans{'x_last_name'}=$row[19];
$trans{'x_company'}=$row[20];
$trans{'x_address'}=$row[21]; # - to check AVS;
$trans{'x_city'}=$row[22]; # - to check AVS;
$trans{'x_state'}=$row[23]; # - to check AVS;
$trans{'x_zip'}=$row[24]; # - to check AVS;
$trans{'x_country'}='USA';
$trans{'x_phone'}=$row[25]; # - to check AVS;
#$trans{'x_fp_sequence'}=1;
#$trans{'x_fp_timestamp'}=time();
$trans{'x_ship_to_first_name'}=$trans{'x_first_name'};
$trans{'x_ship_to_last_name'}=$trans{'x_last_name'};
$trans{'x_ship_to_company'}=$trans{'x_company'};
$trans{'x_ship_to_address'}=$trans{'x_address'};
$trans{'x_ship_to_city'}=$trans{'x_city'};
$trans{'x_ship_to_state'}=$trans{'x_state'};
$trans{'x_ship_to_zip'}=$trans{'x_zip'};
$trans{'x_ship_to_country'}=$trans{'x_country'};
# Invoice Information
$trans{'x_invoice_num'}=$row[16];
$trans{'x_description'}=$row[17];
# Transaction Data
$trans{'x_amount'}=$row[4];
$trans{'x_currency_code'}='USD';
$trans{'x_method'}=$row[5];
$trans{'x_type'}=$row[3];
and so on, with a request to:
$req = POST( "https://secure2.authorize.net/gateway/transact.dll", [ %trans ] );
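For reference, the same name-value-pair POST can be sketched in Python with requests; this is only an illustration of the call the Perl code makes (the endpoint and x_* field names are taken from the script above, the values are placeholders), not a fix:
import requests

# Minimal illustrative subset of the AIM name-value-pair fields used above;
# credentials and transaction values are placeholders.
trans = {
    'x_version': '3.1',
    'x_login': 'API_LOGIN_ID',
    'x_tran_key': 'TRANSACTION_KEY',
    'x_delim_data': 'TRUE',
    'x_delim_char': '|',
    'x_type': 'AUTH_CAPTURE',
    'x_method': 'CC',
    'x_amount': '9.99',
    'x_currency_code': 'USD',
}

resp = requests.post('https://secure2.authorize.net/gateway/transact.dll', data=trans)
print(resp.text.split('|'))  # pipe-delimited response fields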
It's the AIM name-value-pair method, and it no longer works after 7.04.2020: it returns a status 3 "credit card required" error. As I understand it, the last change to the server code was more than half a year ago.
So I'm not sure what the best thing to do is in this situation.
I'd be happy for any help or advice.
In [50]: r = confluence.search(cql=f'title contains "Agent Alert - {event_name}" and label = "agent-event"')
# prints the params of the request
{'cql': 'title contains "Agent Alert - SYS_THRESHOLD_REACHED" and label = "agent-event"', 'expend': 'body.view'}
And I get this error
In [49]: r.content
Out[49]: b'{"statusCode":400,"data":{"authorized":false,"valid":true,"allowedInReadOnlyMode":true,"errors":[],"successful":false},"message":"Could not parse cql : title contains \\"Agent Alert - SYS_THRESHOLD_REACHED\\" and label = \\"agent-event\\"","reason":"Bad Request"}'
However, when I try the exact same string in the Confluence web UI, it works.
Managed to work it out...
It doesn't seem to like contains in the API, so you have to use ~.
The spaces need to be replaced with + within the title search.
So it turned out this is what is accepted:
title~"Agent+Alert+-+SYS_THRESHOLD_REACHED" and label="agent-event"
I'm trying to run object tracking on a folder containing multiple videos. There are 5 videos in my bucket and, following the documentation linked below, it suggests using the wildcard (*) operator. However, when I run the entire script, only 1 video gets annotated, not the whole folder of 5 videos. Also, response2.json does not get created at the output_uri in my GCS bucket.
To identify multiple videos, a video URI may include wildcards in the object-id. Supported wildcards: ' * ' to match 0 or more characters; ‘?’ to match 1 character.
https://googleapis.dev/python/videointelligence/latest/gapic/v1/types.html
Which is what I've done in my input_uri bit of the code:
gcs_uri = 'gs://video_intel/*'
If you check the screenshot, it shows the bucket ID name and multiple videos in the same folder.
Can anyone please help with this question? Thanks.
Full script:
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS']='poc-video-intelligence-da5d4d52cb97.json'
"""Object tracking in a video stored on GCS."""
from google.cloud import videointelligence
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]
gcs_uri = 'gs://video_intel/*'
output_uri = 'gs://video_intel/response2.json'
operation = video_client.annotate_video(input_uri=gcs_uri, features=features, output_uri=output_uri)
print("\nProcessing video for object annotations.")
result = operation.result(timeout=300)
print("\nFinished processing.\n")
# The first result is retrieved because a single video was processed.
object_annotations = result.annotation_results[0].object_annotations
for object_annotation in object_annotations:
    print("Entity description: {}".format(object_annotation.entity.description))
    if object_annotation.entity.entity_id:
        print("Entity id: {}".format(object_annotation.entity.entity_id))
    print(
        "Segment: {}s to {}s".format(
            object_annotation.segment.start_time_offset.seconds
            + object_annotation.segment.start_time_offset.nanos / 1e9,
            object_annotation.segment.end_time_offset.seconds
            + object_annotation.segment.end_time_offset.nanos / 1e9,
        )
    )
    print("Confidence: {}".format(object_annotation.confidence))
    # Here we print only the bounding box of the first frame in the segment
    frame = object_annotation.frames[0]
    box = frame.normalized_bounding_box
    print(
        "Time offset of the first frame: {}s".format(
            frame.time_offset.seconds + frame.time_offset.nanos / 1e9
        )
    )
    print("Bounding box position:")
    print("\tleft  : {}".format(box.left))
    print("\ttop   : {}".format(box.top))
    print("\tright : {}".format(box.right))
    print("\tbottom: {}".format(box.bottom))
    print("\n")
Please modify gcs_uri = 'gs://video_intel/*' to gcs_uri = 'gs://video_intel/*.*'.
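In the script above, that change would look like the following minimal sketch (only input_uri changes; the bucket and output path are the ones from the question, and whether the extra wildcard is strictly required may depend on how your objects are named):
# Wildcard input URI so that every video object in the bucket gets annotated.
gcs_uri = 'gs://video_intel/*.*'
output_uri = 'gs://video_intel/response2.json'
operation = video_client.annotate_video(
    input_uri=gcs_uri, features=features, output_uri=output_uri
)
Note that when several videos match, result.annotation_results should contain one entry per video, so you would iterate over it rather than read only annotation_results[0].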
Second edit: We have command-line auditing enabled and the logs going to Elasticsearch. Basically, I'll be doing this substitution in Logstash, or trying to. The number of hits has actually been almost nil, but I'm trying to cover all the bases.
We are monitoring command-line activity on hosts, and while it's policy that you aren't supposed to enter your password in clear text on the command line, people will.
So I'm looking for a way to detect when someone enters their password and then substitute the password with hash characters. The 1.1.1.8 is an example; it could be any IP address.
From this I want to detect if there is a password there:
net use I: \\1.1.1.8\E$ /user:domain\username password /persistent:yes
A lookbehind almost seems to do it, but I can't get it to stop at the space after the username...
(?<=/user:)(.*)(?<=\s)
net use I: \\1.1.1.8\E$ /user:domain\username password /persistent:yes
when I need it to get -
net use I: \\1.1.1.8\E$ /user:domain\username password /persistent:yes
https://regexr.com/3i6va
... it would be something like this to gsub the password out and replace it with ###:
filter {
  if [event_id] == 4688 {
    mutate {
      gsub => ["[event_data][CommandLine]", "(?<=\/user:)(.*)(?<=\s)", "########"]
    }
  }
}
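For what it's worth, here is a quick Python check of an alternative pattern that captures the /user:<account> part and masks only the token after it. It is a sketch of the regex idea only (the capture-group approach is my assumption, not the lookbehind above), and it would still need to be carried over into the gsub replacement syntax:
import re

# Example command line from the question (the UNC path keeps its double backslash).
cmd = r'net use I: \\1.1.1.8\E$ /user:domain\username password /persistent:yes'

# Capture '/user:<account> ' in group 1 and replace only the next token (the password).
masked = re.sub(r'(/user:\S+\s+)\S+', r'\1########', cmd)

print(masked)
# net use I: \\1.1.1.8\E$ /user:domain\username ######## /persistent:yes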
I want to add a scripted field in Kibana 5 to extract the stored procedure name from the message, so that I can visualize the number of errors per stored procedure.
I have a field "message" where I can see the error text:
"[2017-02-03 05:04:51,087] # MyApp.Common.Server.Logging.ExceptionLogger [ERROR]: XmlWebServices Exception
User:
Name: XXXXXXXXXXXXXXXXXXXXXXX
Email: 926715#test.com
User ID: 926715 (PAID)
Web Server: PERFTESTSRV
Exception:
Type: MyApp.Common.Server.DatabaseException
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Source: MyApp.Common.Server
Database: MyDB
Cmd Type: StoredProcedure
Cmd Text: spGetData
Trans: YES
Trans Lvl: Unspecified"
Guide: https://www.elastic.co/blog/using-painless-kibana-scripted-fields
My plan is to add something like this as a Painless script:
def m = /(?:Cmd\sText:\s*)[a-zA-Z]{1,}/.matcher(doc['message'].value);
if ( m.matches() ) {
    return m.group(1)
} else {
    return "no match"
}
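(As an aside, a quick Python check of the extraction the script above is aiming for; the capture group and substring-style search here are my assumption about the intent, since matches() with a non-capturing group will not return the procedure name:)
import re

# The 'Cmd Text' lines from the example message above, shortened.
message = 'Cmd Type: StoredProcedure\nCmd Text: spGetData\nTrans: YES'

# Capture the stored procedure name after 'Cmd Text:'.
m = re.search(r'Cmd\sText:\s*([a-zA-Z]+)', message)
print(m.group(1) if m else 'no match')  # -> spGetData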
And I've also tried:
def tst = doc['message'].value;
if (tst != null)
{
    def m = /(?:User\sID:\s*)[0-9]{1,}/.matcher(tst);
    if ( m.matches() ) {
        return m.group(1)
    }
} else {
    return "no match"
}
How can I address doc['message'].value?
When I try to do so, I get the error "Courier Fetch: 5 of 5 shards failed."
When I try doc['message.keyword'].value, I do not get the full message inside it. I do not understand where I can learn the structure of what the message field contains and how I can refer to it.
I assume the problem is the length of the message: it is too long to be of type "keyword", and it would have to be of type "text", which is not supported by Painless scripted fields.
https://www.elastic.co/blog/using-painless-kibana-scripted-fields
Both Painless and Lucene expressions operate on fields stored in doc_values. So for string data, you will need to have the string to be stored in data type keyword. Scripted fields based on Painless also cannot operate directly on _source.
https://www.elastic.co/guide/en/elasticsearch/reference/master/keyword.html
A field to index structured content such as email addresses, hostnames, status codes, zip codes or tags.
If you need to index full text content such as email bodies or product descriptions, it is likely that you should rather use a text field.
Now I'm developing a project using the SoftLayer API. I want to get the OS list through the API, just like the portal site shows it. Is there a certain method to get the correct OS list? Regards.
Is there a specific language example you are looking for? If you use the SoftLayer CLI, you can do this with the following commands:
slcli vs create-options # For Virtual Guests
slcli server create-options # For Bare Metal Servers
Unfortunately, it's not possible to retrieve the same result as the Control Portal with a single call, but it is possible using a programming language.
To see programming languages supported by SoftLayer:
SoftLayer Development Network
Take a look at the following Python script:
"""
List OSs for VSI similar than Portal
See below references for more details.
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getItemPrices
http://sldn.softlayer.com/article/object-filters
http://sldn.softlayer.com/article/object-Masks
#License: http://sldn.softlayer.com/article/License
#Author: SoftLayer Technologies, Inc. <sldn#softlayer.com>
"""
import SoftLayer
import datetime
import time
# Your SoftLayer username and API key
USERNAME = 'set me'
API_KEY = 'set me'
# Package id
packageId = 46
# Datacenter
datacenter = 'wdc04'
# Computing instance
core = '1 x 2.0 GHz Core'
# Creating service
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
packageService = client['SoftLayer_Product_Package']
# Declaring filters and mask to get additional information for items
filterDatacenter = {"itemPrices": {"pricingLocationGroup": {"locations": {"name": {"operation": datacenter}}}}}
objectMaskDatacenter = 'mask[pricingLocationGroup[locations]]'
objectMask = 'mask[pricingLocationGroup[locations],categories,item[id, description, capacity,softwareDescription[manufacturer],availabilityAttributeCount, availabilityAttributes[attributeType]]]'
filterInstance = {
    'itemPrices': {
        'categories': {
            'categoryCode': {
                'operation': 'os'
            }
        }
    }
}
# Define a variable to get capacity
coreCapacity = 0
# To get item id information
itemId = 0
flag = False
# Define the manufacturers from which you like to get information
manufacturers = ["CentOS", "CloudLinux", "CoreOS", "Debian", "Microsoft", "Redhat", "Ubuntu"]
# Declare time to avoid list OS expired
now = time.strftime("%m/%d/%Y")
nowTime = time.mktime(datetime.datetime.strptime(now, "%m/%d/%Y").timetuple())
try:
    conflicts = packageService.getItemConflicts(id=packageId)
    itemPrices = packageService.getItemPrices(id=packageId, filter=filterDatacenter, mask=objectMask)
    if len(itemPrices) == 0:
        filterDatacenter = {"itemPrices": {"locationGroupId": {"operation": "is null"}}}
        itemPrices = packageService.getItemPrices(id=packageId, filter=filterDatacenter, mask=objectMask)
    for itemPrice in itemPrices:
        if itemPrice['item']['description'] == core:
            itemId = itemPrice['item']['id']
            coreCapacity = itemPrice['item']['capacity']
    result = packageService.getItemPrices(id=packageId, mask=objectMask, filter=filterInstance)
    filtered_os = []
    for item in result:
        for attribute in item['item']['availabilityAttributes']:
            expireTime = time.mktime(datetime.datetime.strptime(attribute['value'], "%m/%d/%Y").timetuple())
            if ((attribute['attributeType']['keyName'] == 'UNAVAILABLE_AFTER_DATE_NEW_ORDERS') and (expireTime >= nowTime)):
                filtered_os.append(item)
        if item['item']['availabilityAttributeCount'] == 0:
            filtered_os.append(item)
    for manufacturer in manufacturers:
        print(manufacturer)
        for itemOs in filtered_os:
            for conflict in conflicts:
                if (((itemOs['item']['id'] == conflict['itemId']) and (itemId == conflict['resourceTableId'])) or ((itemId == conflict['itemId']) and (itemOs['item']['id'] == conflict['resourceTableId']))):
                    flag = False
                    break
                else:
                    flag = True
            if flag:
                if itemOs['item']['softwareDescription']['manufacturer'] == manufacturer:
                    if 'capacityRestrictionMinimum' in itemOs:
                        if ((itemOs['capacityRestrictionMinimum'] <= coreCapacity) and (coreCapacity <= itemOs['capacityRestrictionMaximum'])):
                            print("%s Price Id: %s Item Id: %s" % (itemOs['item']['description'], itemOs['id'], itemOs['item']['id']))
                    else:
                        print("%s Price Id: %s Item Id: %s" % (itemOs['item']['description'], itemOs['id'], itemOs['item']['id']))
        print("---------------------------------------------------")
except SoftLayer.SoftLayerAPIError as e:
    print('Unable to get Item Prices faultCode=%s, faultString=%s'
          % (e.faultCode, e.faultString))
I added the core variable because the OS options have capacity restrictions on cores. I also added the datacenter to get the specific core item price for a specific datacenter; perhaps that is unnecessary, but you can edit this script according to your requirements.
The same idea could be applied in other programming languages.
I hope it helps. Please let me know if you have any doubts or comments, or if you need further assistance.
Updated
I improved the script: I added the ability to check conflicts between items, in order to get the same result for each kind of Computing Instance.