Unit testing: assert that a JSON response contains a list

When writing a unit test in the Phoenix framework, how do you check whether a JSON response contains a list?
The existing test is below; it fails because children gets populated. I just want the test to tell me that my JSON response contains children and that children is a list.
test "shows chosen resource", %{conn: conn} do
parent = Repo.insert! %Parent{}
conn = get conn, parent_path(conn, :show, parent)
assert json_response(conn, 200)["data"] == %{"id" => parent.id,
"children" => []}
end

I would use three asserts for this, starting with a pattern-match assert to check the basic structure and extract id and children:
assert %{"id" => id, "children" => children} = json_response(conn, 200)["data"]
assert id == parent.id
assert is_list(children)
Note that this test will pass even if the map contains keys other than id and children.
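Putting those three asserts together, the whole test might look like this (a sketch reusing the setup from the test in the question):
test "shows chosen resource", %{conn: conn} do
  parent = Repo.insert! %Parent{}
  conn = get conn, parent_path(conn, :show, parent)

  assert %{"id" => id, "children" => children} = json_response(conn, 200)["data"]
  assert id == parent.id
  assert is_list(children)
end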

With a JSON Schema you can use ex_json_schema (https://github.com/jonasschmidt/ex_json_schema) to validate the full JSON structure.
iex> schema = %{
       "type" => "object",
       "properties" => %{
         "foo" => %{
           "type" => "string"
         }
       }
     } |> ExJsonSchema.Schema.resolve
and
iex> ExJsonSchema.Validator.valid?(schema, %{"foo" => "bar"})
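As a hedged sketch, the same idea applied to the test from the question could look like the following; the schema below (integer "id", array "children", both required) is an assumption based on the expected response shape:
test "shows chosen resource", %{conn: conn} do
  parent = Repo.insert! %Parent{}
  conn = get conn, parent_path(conn, :show, parent)

  # Assumed schema: "data" must contain an integer "id" and an array "children"
  schema =
    %{
      "type" => "object",
      "properties" => %{
        "id" => %{"type" => "integer"},
        "children" => %{"type" => "array"}
      },
      "required" => ["id", "children"]
    }
    |> ExJsonSchema.Schema.resolve()

  assert ExJsonSchema.Validator.valid?(schema, json_response(conn, 200)["data"])
end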
Also remember to have only one logical assertion per test (http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/).

Related

Is it possible to somehow use forward_missing_to to fetch/set hash values?

I am building a client for the Asterisk Manager Interface (AMI), which sends back "events" (strings in key-value format). Depending on the event type, the keys differ. Some known keys would be defined similarly to a property, but I also want to access everything else in a similar way (I only need a getter).
Below is an example; I am confused about how to use forward_missing_to to fetch a hash value (i.e. e.foo => e.data["foo"] and e.foo? => e.data["foo"]?).
class Event
  @data = Hash(String, String).new

  def initialize(data : Hash(String, String))
    @data.merge! data
  end

  # known field (event), here I could apply various validations etc.
  def event=(@name : String)
    @data["name"] = @name
  end

  def event : String
    @data["name"].not_nil!
  end

  # Confusing about this:
  forward_missing_to @data
end
Example of usage:
e = Event.new({"event" => "Test", "foo" => "bar"})
p e.event # => "Test" (only this works)
p e.foo # => expecting to get "bar"
p e.unknown # => should throw Unhandled exception: Missing hash key: "unknown" (KeyError)
p e.unknown? # => nil
forward_missing_to just means this:
e = Event.new({"event" => "Test", "foo" => "bar"})
# Same as invoking `foo` on the @data instance variable
# because you said forward_missing_to @data
e.foo
It doesn't mean invoking ["foo"] on @data.
You can do it like this:
macro method_missing(call)
  {% if call.args.empty? && !call.named_args %}
    @data[{{call.name.stringify}}]
  {% else %}
    {% raise "undefined method '#{call.name}' for #{@type}" %}
  {% end %}
end
However, I wouldn't advise using method_missing, mainly because it might eventually be removed from the language (together with forward_missing_to).
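If you want to avoid method_missing altogether, one alternative (a sketch, not part of the original answer) is to expose explicit lookup methods that delegate to @data, at the cost of writing e["foo"] instead of e.foo:
class Event
  def initialize(@data : Hash(String, String))
  end

  # e["key"] raises KeyError when the key is missing
  def [](key : String) : String
    @data[key]
  end

  # e["key"]? returns nil when the key is missing
  def []?(key : String) : String?
    @data[key]?
  end
end

e = Event.new({"event" => "Test", "foo" => "bar"})
p e["foo"]      # => "bar"
p e["unknown"]? # => nil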

How to map JSON::Any to a custom object in Crystal?

How do I map parsed JSON of type JSON::Any to a custom object?
In my case, I am working on a chat client. The chat API can respond to requests with the following JSON:
{"ok" => true,
"result" =>
[{"update_id" => 71058322,
"message" =>
{"message_id" => 20,
"from" => "Benjamin",
"text" => "hey"}}]}
Somewhere inside my API client code I parse this JSON to perform some basic health checks and pass the result to a response consumer. In the consumer, I iterate over the result array and try to convert each update to an appropriate object:
module Types
  class Update
    JSON.mapping({
      update_id: {type: Int32},
      message: {type: Message},
    })
  end
end

module Types
  class Message
    JSON.mapping({
      message_id: Int32,
      date: Int32,
      text: String,
    })
  end
end
return unless response["ok"]

response["result"].each do |data|
  update = Types::Update.from_json(data)
end
Unfortunately, last line results in compile error:
no overload matches 'JSON::Lexer.new' with type JSON::Any
Apparently, Object.from_json accepts only JSON strings, not already-parsed JSON. In my case, data is a JSON::Any object.
The dirty fix Types::Update.from_json(data.to_json) works, but it looks ridiculous.
What is the proper way to map JSON object to custom type preserving all nested structure?
JSON.mapping doesn't work nicely together with JSON.parse. To solve your problem, you can create another mapping, Types::Result, and parse the whole JSON using Object.from_json, which is even more convenient to work with:
module Types
  class Message
    JSON.mapping(
      message_id: Int32,
      text: String
    )
  end

  class Update
    JSON.mapping(
      update_id: Int32,
      message: Message
    )
  end

  class Result
    JSON.mapping(
      success: { key: "ok", type: Bool },
      updates: { key: "result", type: Array(Update) }
    )
  end
end
result = Types::Result.from_json string_json
result.success # => true
result.updates.first.message.text # => "hey"
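With that mapping in place, the consumer loop from the question could shrink to something like the sketch below; handle_update is a hypothetical callback standing in for your own consumer code:
# Inside your consumer method; string_json is the raw response body
result = Types::Result.from_json(string_json)
return unless result.success

result.updates.each do |update|
  handle_update(update) # hypothetical: do whatever you need with each Types::Update
end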

Parsing out PowerShell CommandLine Data from EventLog

I am sending Windows Event Logs with Winlogbeat to Logstash, primarily focused on PowerShell events within the logs.
Example:
<'Data'>NewCommandState=Stopped SequenceNumber=1463 HostName=ConsoleHost HostVersion=5.1.14409.1005 HostId=b99970c6-0f5f-4c76-9fb0-d5f7a8427a2a HostApplication=C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe EngineVersion=5.1.14409.1005 RunspaceId=bd4224a9-ce42-43e3-b8bb-53a302c342c9 PipelineId=167 CommandName=Import-Module CommandType=Cmdlet ScriptName= CommandPath= CommandLine=Import-Module -Verbose.\nishang.psm1<'/Data'>
How can I extract the CommandLine= field using grok to get the following?
Import-Module -Verbose.\nishang.psm1
Grok is a wrapper around regular expressions. If you can parse data with a regex, you can implement it with grok.
Even though your scope is specific to the CommandLine field, parsing each of the fields in most key=value logs is pretty straightforward, and a single regex can be used for every field with some grok filters. If you intend to store, query, and visualize logs - the more data, the better.
Regular Expression:
First we start with the following:
(.*?(?=\s\w+=|\<|$))
.*? - lazily matches any characters except line terminators
(?=\s\w+=|\<|$) - a positive lookahead asserting that the match must be followed by one of the following:
\s\w+= - word characters preceded by a space and followed by a =
|\<|$ - or, alternatively, a < or the end of the line, so that neither is included in the matching group.
This means that each field can be parsed similar to the following:
CommandLine=(.*?(?=\s\w+=|\<|$))
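If you only care about CommandLine, a minimal filter using that expression as an inline named capture might look like the sketch below; the reusable pattern-file approach that follows is generally preferable if you want the other fields too:
filter {
  grok {
    # Inline Oniguruma named capture using the same lookahead-based expression
    match => [ "message", "CommandLine=(?<CommandLine>.*?(?=\s\w+=|\<|$))" ]
  }
}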
Grok:
Now we can begin creating grok filters. Their power is that reusable pattern components can be given semantic names.
/etc/logstash/patterns/powershell.grok:
# Patterns
PS_KEYVALUE (.*?(?=\s\w+=|\<|$))
# Fields
PS_NEWCOMMANDSTATE NewCommandState=%{PS_KEYVALUE:NewCommandState}
PS_SEQUENCENUMBER SequenceNumber=%{PS_KEYVALUE:SequenceNumber}
PS_HOSTNAME HostName=%{PS_KEYVALUE:HostName}
PS_HOSTVERSION HostVersion=%{PS_KEYVALUE:HostVersion}
PS_HOSTID HostId=%{PS_KEYVALUE:HostId}
PS_HOSTAPPLICATION HostApplication=%{PS_KEYVALUE:HostApplication}
PS_ENGINEVERSION EngineVersion=%{PS_KEYVALUE:EngineVersion}
PS_RUNSPACEID RunspaceId=%{PS_KEYVALUE:RunspaceId}
PS_PIPELINEID PipelineId=%{PS_KEYVALUE:PipelineId}
PS_COMMANDNAME CommandName=%{PS_KEYVALUE:CommandName}
PS_COMMANDTYPE CommandType=%{PS_KEYVALUE:CommandType}
PS_SCRIPTNAME ScriptName=%{PS_KEYVALUE:ScriptName}
PS_COMMANDPATH CommandPath=%{PS_KEYVALUE:CommandPath}
PS_COMMANDLINE CommandLine=%{PS_KEYVALUE:CommandLine}
Here %{PATTERN:label} uses the PS_KEYVALUE regular expression, and the matching group is labeled with that name in the resulting JSON. This is where you can be flexible in naming the fields you know.
/etc/logstash/conf.d/powershell.conf:
input {
  ...
}
filter {
  grok {
    patterns_dir => "/etc/logstash/patterns"
    break_on_match => false
    match => [
      "message", "%{PS_NEWCOMMANDSTATE}",
      "message", "%{PS_SEQUENCENUMBER}",
      "message", "%{PS_HOSTNAME}",
      "message", "%{PS_HOSTVERSION}",
      "message", "%{PS_HOSTID}",
      "message", "%{PS_HOSTAPPLICATION}",
      "message", "%{PS_ENGINEVERSION}",
      "message", "%{PS_RUNSPACEID}",
      "message", "%{PS_PIPELINEID}",
      "message", "%{PS_COMMANDNAME}",
      "message", "%{PS_COMMANDTYPE}",
      "message", "%{PS_SCRIPTNAME}",
      "message", "%{PS_COMMANDPATH}",
      "message", "%{PS_COMMANDLINE}"
    ]
  }
}
output {
  stdout { codec => "rubydebug" }
}
Result:
{
    "HostApplication" => "C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "EngineVersion" => "5.1.14409.1005",
    "RunspaceId" => "bd4224a9-ce42-43e3-b8bb-53a302c342c9",
    "message" => "<'Data'>NewCommandState=Stopped SequenceNumber=1463 HostName=ConsoleHost HostVersion=5.1.14409.1005 HostId=b99970c6-0f5f-4c76-9fb0-d5f7a8427a2a HostApplication=C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe EngineVersion=5.1.14409.1005 RunspaceId=bd4224a9-ce42-43e3-b8bb-53a302c342c9 PipelineId=167 CommandName=Import-Module CommandType=Cmdlet ScriptName= CommandPath= CommandLine=Import-Module -Verbose.\\nishang.psm1<'/Data'>",
    "HostId" => "b99970c6-0f5f-4c76-9fb0-d5f7a8427a2a",
    "HostVersion" => "5.1.14409.1005",
    "CommandLine" => "Import-Module -Verbose.\\nishang.psm1",
    "@timestamp" => 2017-05-12T23:49:24.130Z,
    "port" => 65134,
    "CommandType" => "Cmdlet",
    "@version" => "1",
    "host" => "10.0.2.2",
    "SequenceNumber" => "1463",
    "NewCommandState" => "Stopped",
    "PipelineId" => "167",
    "CommandName" => "Import-Module",
    "HostName" => "ConsoleHost"
}

How to process multiline log entry with logstash filter?

Background:
I have a custom-generated log file that has the following pattern:
[2014-03-02 17:34:20] - 127.0.0.1|ERROR| E:\xampp\htdocs\test.php|123|subject|The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
[2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line
The second entry, [2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line, is a dummy line, just to let Logstash know that the multiline event is over; this line is dropped later on.
My config file is the following:
input {
  stdin{}
}
filter{
  multiline{
    pattern => "^\["
    what => "previous"
    negate => true
  }
  grok{
    match => ['message',"\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}"]
  }
  if [loglevel] == "DEBUG"{  # the event flush line
    drop{}
  } else if [loglevel] == "ERROR" {  # the first line of multievent
    grok{
      match => ['message',".+\|.+\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
    }
  } else {  # its a new line (from the multi line event)
    mutate{
      replace => ["content", "%{content} %{message}"]  # Supposing each new line will override the message field
    }
  }
}
output {
  stdout{ debug=>true }
}
The output for the content field is: The error message goes here ; array (
Problem:
My problem is that I want to store the rest of the multiline event in the content field:
The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
so that I can remove the message field later.
The @message field contains the whole multiline event, so I tried the mutate filter with the replace function on it, but I'm just unable to get it working :(.
I don't understand how the multiline filter works; if someone could shed some light on this, it would be really appreciated.
Thanks,
Abdou.
I went through the source code and found out that:
The multiline filter cancels all events that are considered a follow-up of a pending event, then appends that line to the original message field, meaning any filters placed after the multiline filter won't apply in this case.
The only event that will ever pass the filter is one that is considered to be a new one (something that starts with [ in my case).
Here is the working code :
input {
  stdin{}
}
filter{
  if "|ERROR|" in [message]{  # if this is the 1st message in many lines message
    grok{
      match => ['message',"\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
    }
    mutate {
      replace => [ "message", "%{content}" ]  # replace the message field with the content field ( so it auto append later in it )
      remove_field => ["content"]  # we no longer need this field
    }
  }
  multiline{  # Nothing will pass this filter unless it is a new event ( new [2014-03-02 1.... )
    pattern => "^\["
    what => "previous"
    negate => true
  }
  if "|DEBUG| flush_multi_line" in [message]{
    drop{}  # We don't need the dummy line so drop it
  }
}
output {
  stdout{ debug=>true }
}
Cheers,
Abdou
Grok and multiline handling is mentioned in this issue: https://logstash.jira.com/browse/LOGSTASH-509
Simply add "(?m)" in front of your grok regex and you won't need the mutate step. Example from the issue:
pattern => "(?m)<%{POSINT:syslog_pri}>(?:%{SPACE})%{GREEDYDATA:message_remainder}"
The multiline filter will add the "\n" to the message. For example:
"[2014-03-02 17:34:20] - 127.0.0.1|ERROR| E:\\xampp\\htdocs\\test.php|123|subject|The error message goes here ; array (\n 'create' => \n array (\n 'key1' => 'value1',\n 'key2' => 'value2',\n 'key3' => 'value3'\n ),\n)"
However, the grok filter can't parse across the "\n". Therefore you need to substitute each \n with another character, say, a blank space.
mutate {
  gsub => ['message', "\n", " "]
}
Then, grok pattern can parse the message. For example:
"content" => "The error message goes here ; array ( 'create' => array ( 'key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3' ), )"
Isn't the issue simply the ordering of the filters? Order is very important to Logstash. You don't need another line to indicate that you've finished outputting the multiline log entry. Just ensure the multiline filter appears before the grok (see below).
P.S. I've managed to parse a multiline log line fine where XML was appended to the end of the log line and spanned multiple lines, and I still got a nice clean XML object into my content-equivalent field (named xmlrequest below). Before you say anything about logging XML in logs... I know... it's not ideal... but that's for another debate :)
filter {
  multiline{
    pattern => "^\["
    what => "previous"
    negate => true
  }
  mutate {
    gsub => ['message', "\n", " "]
  }
  mutate {
    gsub => ['message', "\r", " "]
  }
  grok{
    match => ['message',"\[%{WORD:ONE}\] \[%{WORD:TWO}\] \[%{WORD:THREE}\] %{GREEDYDATA:xmlrequest}"]
  }
  xml {
    source => "xmlrequest"
    remove_field => ["xmlrequest"]
    target => "request"
  }
}

link_to passing query parameter

I need to be able to generate the following URL string:
http://localhost:3000/admin/cities?q%5Bprovince_id_eq%5D=1&commit=Filter&order=city_name_asc
How does this link_to need to be set up?
link_to(p.cities.count, admin_cities_path)
You could just pass the query parameters as a hash to the URL helper. For example, running the following commands in my console, I get the following hash:
url = "http://localhost:3000/admin/cities?q%5Bprovince_id_eq%5D=1&commit=Filter&order=city_name_asc"
query = URI.parse(url).query
hash = Rack::Utils.parse_nested_query(query)
#=> { "q" => { "province_id_eq" => "1" }, "commit" => "Filter", "order" => "city_name_asc" }
Then you'd just do
admin_cities_url(hash)
to get back to the original URL.
This will probably help you; take a look at the part after "link_to can also produce links with anchors or query strings" in the Rails documentation for link_to:
link_to(p.cities.count, admin_cities_path(q: { province_id_eq: 1 }, order: "city_name_asc"))
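If you also need the commit parameter from the original URL, it can go in the same options hash (a sketch; Rails may serialize the query parameters in a different order than shown in the question):
link_to(p.cities.count,
        admin_cities_path(q: { province_id_eq: 1 }, commit: "Filter", order: "city_name_asc"))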