Geb given-when-then: is there a way to ignore an assertion in 'then'?

Situation
For my test, I have a step that asserts the return value of a method. The method takes a boolean value as a parameter (among other things). I could create a new column with that boolean value; however, _variableB happens to mirror the boolean value I want.
Problem
When I run this step, Geb asserts on _variableB == 'some text' itself, rather than just using it as a parameter.
then:
    MethodA(something, something, _variableB == 'some text')

where:
    _variableA | _variableB
Question
Is there a way of ignoring/disabling this assertion?
Edit: inserted received error
Received error
Condition not satisfied:

Validator.waitForxxxxxxOnValidate(_xxxxxxType, xxxxxData.xxxxxxIdHex, _receivers == 'partner2')
|         |                                    |         |            |          |
|         null                                 |         xxxxxx-xxx   partner1   false
class xx.xxxx.partner2.helpers.xxxxxxValidator |                                 1 difference (75% similarity)
                                                                                 partner(1)
                                                                                 partner(2)

First of all, Geb does not assert anything, because it's not a testing framework; it's a browser automation framework. You seem to be using Spock, which is a testing framework and which performs implicit assertions based on given/when/then code labels.
Secondly, your assumption that _receivers == 'partner2' is asserted is incorrect. All that happens is that when the implicit assertion on the call to Validator.waitForxxxxxxOnValidate() fails (because it returns null, as per the failure you included), Spock prints partial values of the asserted expression to make it easier for you to understand why the assertion failed. It just shows you that the value of _receivers is "partner1" and that, because it's not equal to "partner2", false is passed as the last argument of the call to Validator.waitForxxxxxxOnValidate().
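The same point can be sketched in Python (an analogy only; Spock is Groovy-based, and the function and variable names below are made up for illustration): a comparison passed as an argument is simply evaluated to a boolean, never asserted on its own.

```python
# Hypothetical stand-in for Validator.waitForxxxxxxOnValidate(): it just
# receives whatever boolean value the comparison evaluated to.
def wait_for_validate(kind, id_hex, flag):
    return flag

receivers = "partner1"

# The comparison below is an ordinary argument expression - nothing
# asserts it; it merely evaluates to False and is passed along.
result = wait_for_validate("some-type", "some-id", receivers == "partner2")
print(result)  # False
```

Only the outer expression under a then: label is implicitly asserted by Spock, which is why the failure message merely displays the sub-expression values.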

Related

Django Exclude Model Objects from query using a list

Having trouble with my query excluding results from a different query.
I have a table, Segment, that I have already gotten entries from. It is related to another table, Program, and I want to run the same query on it, but I want to exclude any of the programs that were already found during the segment query.
When I try to do it, the list isn't allowed to be used in the comparison. See below:
query = "My Query String"
segment_results = Segment.objects.filter(
    Q(title__icontains=query) |
    Q(author__icontains=query) |
    Q(voice__icontains=query) |
    Q(library__icontains=query) |
    Q(summary__icontains=query)).distinct()

# There can be multiple segments in the same program
unique_programs = []
for segs in segment_results:
    if segs.program.pk not in unique_programs:
        unique_programs.append(segs.program.pk)

program_results = ((Program.objects.filter(
    Q(title__icontains=query) |
    Q(library__icontains=query) |
    Q(mc__icontains=query) |
    Q(producer__icontains=query) |
    Q(editor__icontains=query) |
    Q(remarks__icontains=query)).distinct()) &
    (Program.objects.exclude(id__in=[unique_programs])))
I can run:
for x in unique_programs:
    p = Program.objects.filter(id=x)
    print("p = %s" % p)
And I get a list of Programs...which works
Just not sure how to incorporate this type of logic into the results query and have it exclude at the same time. I tried the exclude keyword, but the main problem is it doesn't like the list being in the query - I get an error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'.
Feel like I am close...
The answer is simple: I was not comparing objects correctly in the filters. The TypeError came from the extra brackets in id__in=[unique_programs], which passed a list nested inside another list. The correct statement would be:
program_results = (Program.objects.filter(
    Q(title__icontains=query) |
    Q(library__icontains=query) |
    Q(mc__icontains=query) |
    Q(producer__icontains=query) |
    Q(editor__icontains=query) |
    Q(remarks__icontains=query)) &
    (Program.objects.exclude(id__in=Program.objects.filter(id__in=unique_programs))))
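For illustration, the dedup-then-exclude logic can be sketched in plain Python (the Program and Segment classes here are minimal stand-ins, not the real Django models). A set avoids the manual membership check, and passing the collection directly, as in exclude(id__in=unique_programs), also avoids the original TypeError:

```python
# Minimal stand-ins for the Django models, for illustration only.
class Program:
    def __init__(self, pk):
        self.pk = pk

class Segment:
    def __init__(self, program):
        self.program = program

segments = [Segment(Program(1)), Segment(Program(1)), Segment(Program(2))]

# A set comprehension dedupes program pks without the manual loop.
unique_programs = {seg.program.pk for seg in segments}

# Equivalent of Program.objects.exclude(id__in=unique_programs) - note the
# pks are passed directly, not wrapped in another list.
all_program_pks = [1, 2, 3, 4]
remaining = [pk for pk in all_program_pks if pk not in unique_programs]
print(remaining)  # [3, 4]
```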

Selection / Filtering by kind

I would like to select or filter scenarios by kinds in my Capella Project. When I use:
ownedScenarios.kind
It returns:
FUNCTIONAL
DATA_FLOW
FUNCTIONAL
DATA_FLOW
The first request I tried returns an empty set:
ownedScenarios->select(myScenario | myScenario.kind='DATA_FLOW')
The second one returns "ERROR: Couldn't find the 'filter(Set(EClassifier=Scenario),EClassifier=ScenarioKind)' service (78, 124)"
ownedScenarios->filter(interaction::ScenarioKind::DATA_FLOW)
Any idea why?
Thanks
interaction::ScenarioKind is an EEnum (an enumeration) and interaction::ScenarioKind::DATA_FLOW is an EEnumLiteral (a value from the interaction::ScenarioKind enumeration), but the filter() service takes an EClass as its parameter, which is why your second attempt fails. Your first attempt returns an empty set because kind is compared to the string 'DATA_FLOW' rather than to the enumeration literal. In order to filter on an EEnumLiteral, you can use the select() service as in your first attempt, comparing against the literal itself:
ownedScenarios->select(myScenario | myScenario.kind = interaction::ScenarioKind::DATA_FLOW)
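The underlying rule is the same as for enumerations in most languages: an enum value never equals a string. A Python analogy (illustrative only; the AQL fix itself is shown above):

```python
from enum import Enum

class ScenarioKind(Enum):
    FUNCTIONAL = 1
    DATA_FLOW = 2

kinds = [ScenarioKind.FUNCTIONAL, ScenarioKind.DATA_FLOW, ScenarioKind.FUNCTIONAL]

# Comparing against a string silently matches nothing, like the first attempt:
print([k for k in kinds if k == "DATA_FLOW"])  # []

# Comparing against the enumeration literal works, like the select() fix:
print([k for k in kinds if k == ScenarioKind.DATA_FLOW])
```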

AWS Redshift Query too long exception

I've been trying to run a consolidation query in AWS Redshift (node type: RA3.4xLarge) where the length of the query is around 6k characters (I know, pretty huge!!).
The query fails with the error below.
psycopg2.errors.InternalError_: Value too long for character type
DETAIL:
-----------------------------------------------
error: Value too long for character type
code: 8001
context: Value too long for type character varying(1)
query: 388111
location: string.cpp:175
process: query0_251_388111 [pid=13360]
-----------------------------------------------
On further digging, I found that the stl_query table logs every query run on the cluster, and that it has a 4k character limit on the querytxt column, which leads to the above failure of the entire query.
View "pg_catalog.stl_query"
           Column           |            Type             | Modifiers
----------------------------+-----------------------------+-----------
 userid                     | integer                     |
 query                      | integer                     |
 label                      | character(320)              |
 xid                        | bigint                      |
 pid                        | integer                     |
 database                   | character(32)               |
 querytxt                   | character(4000)             |
 starttime                  | timestamp without time zone |
 endtime                    | timestamp without time zone |
 aborted                    | integer                     |
 insert_pristine            | integer                     |
 concurrency_scaling_status | integer                     |
So the question here is (apart from reducing the query length): is there any workaround for this situation? Or am I deducing this whole thing wrong?
You are off track in your analysis - I've worked with many Redshift queries over 4K in size (unfortunately). Unless they broke something in a recent release, this isn't your issue.
First off, stl_query only stores the first 4K characters of the query - if you want to see entire long queries, you need to look at the stl_querytext table. Be aware that stl_querytext breaks the query into 200 character chunks and you need to reassemble them using the sequence values.
Now, your error points at a varchar(1), a one byte varchar, as the source of the issue. I would look at your query and tables for varchar(1) sized columns and see what is being put into them. You will likely need to understand what length of string is being inserted, and why, to find your solution.
Remember that in Redshift non-ASCII characters are stored in more than one byte, and varchar(1) is a one byte column. So if you try to insert a single non-ASCII character into this column, it will not fit. Two functions could be of use if the issue is related to non-ASCII characters:
length() - find the length of a string in CHARACTERS
octet_length() - find the length of a string in BYTES
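The character/byte distinction can be illustrated outside Redshift as well; this Python sketch mirrors what length() and octet_length() would report for a single accented character, which is why such a character overflows a varchar(1):

```python
s = "é"  # one character, but two bytes in UTF-8

print(len(s))                  # 1 - analogous to Redshift length()
print(len(s.encode("utf-8")))  # 2 - analogous to octet_length()
```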
If you still have trouble finding the issue you may want to post the DDL and SQL you are using so you can get more specific recommendations.

If statements and extracting values

I have a result set that looks like
{add=[44961373 (1645499799657512961), 44961374 (1645499799658561538), 44962094 (1645499799659610114), 44962095 (1645499799659610117), 44962096 (1645499799660658689), 44962097 (1645499799660658691), 44962098 (1645499799661707264), 44962099 (1645499799661707267), 44962100 (1645499799662755840), 44962101 (1645499799662755843), ... (592 adds)]}
If the add=[...] array has more than 10 elements, then the log puts (x adds) at the end of the statement to show how many adds there actually were. If it has fewer than 10, it won't include the (x adds) part. I want to both timechart and single-value these outputs to a dashboard (separate modules).
I can get one or the other, but I would like to use some logic to figure out which one to report.
index="index" host="host*" path=/update | eval count=mvcount(add) | stats count
will get the count of the array
index="index" host="host*" path=/update | stats sum(Adds)
will get the value of the (x adds). Adds is an 'extracted field'.
How do I get either one in the same search, i.e. if the add array has more than 10 elements, use sum(Adds)?
index="index" host="host*" path=/update | eval count=mvcount(add)
| eval first_ten="{add=[".mvjoin(mvindex(add,0,9), ",")." (".(count-10)." adds)]}"
| eval msg=if(count<10,_raw,first_ten)
You can do something like this: get the count of adds, then create a new string with the first 10 elements only, with the count-10 adds message at the end. Then, depending on the actual count, either use the original event (_raw) or the new message.
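The truncation logic can be sketched in plain Python to make the intent concrete (the {add=[...]} format is taken from the question; the function name is made up for illustration):

```python
def summarize_adds(add):
    # Mirror the logged format: show all elements when there are 10 or
    # fewer, otherwise show the first 10 plus an "(x adds)" suffix with
    # the total count.
    if len(add) <= 10:
        return "{add=[" + ", ".join(add) + "]}"
    shown = ", ".join(add[:10])
    return "{add=[" + shown + " ... (" + str(len(add)) + " adds)]}"

print(summarize_adds(["1", "2"]))  # {add=[1, 2]}
print(summarize_adds([str(i) for i in range(15)]))
```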

about .replace in Velocity

I have a pipe (|) delimited file I'm sending out, and in a string field the client is using pipes as just a random character to separate points.
For example, this is the text they have in the field:
Encore AWD | Leather | Navigation | Sunroof | Back Up Camera | USB | Bluetooth
I need to replace the | with a - and this is the code I'm trying:
#set ($va.list_comment = $va.listing_comment.replace("|", "-"))
It is still outputting the | characters.
Does anyone have ideas about what I could be doing wrong here?
You cannot assign a new value into an object like that. If you're using the latest version of Velocity, such an assignment would work only if there were a setList_comment method on $va, or if $va were a Map. Otherwise, you have to create a new variable to hold the new value and use it:
#set ($fixedListing = $va.listing_comment.replace("|", "-"))
$fixedListing
Or if you don't need that value for anything else than just printing it once, skip the assignment completely and just print the outcome:
$va.listing_comment.replace("|", "-")
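As an aside, .replace("|", "-") needs no escaping here because Java's String.replace(CharSequence, CharSequence), which Velocity calls in this expression, does literal replacement; only regex-based methods such as replaceAll treat an unescaped | as alternation. Python's str.replace, used below purely for illustration, behaves the same way:

```python
import re

s = "Encore AWD | Leather | Navigation"

# Literal replacement - the pipe needs no escaping:
print(s.replace("|", "-"))    # Encore AWD - Leather - Navigation

# In a regex, the pipe must be escaped or it means alternation:
print(re.sub(r"\|", "-", s))  # Encore AWD - Leather - Navigation
```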
If that still doesn't work, make sure the value returned is indeed a java.lang.String and not something else:
$va.listing_comment.class