I have a request that returns a response body with:
{
"age" : 20
}
I want to verify that the age value is between 19 and 30. How can I do this with Postman?
You can check this with one line of code:
pm.test("Check age is between 19 and 30", function () {
    pm.expect(pm.response.json().age).gte(19).lte(30);
});
gte() = greater than or equal to
lte() = less than or equal to
If you just want strictly greater than or less than, you can use gt() and lt() instead.
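Chai (which pm.expect wraps) also ships a built-in within(min, max) assertion, inclusive on both ends, so the range check can stay a single readable line:

pm.test("Check age is between 19 and 30", function () {
    // within(19, 30) asserts 19 <= value <= 30 (inclusive)
    pm.expect(pm.response.json().age).to.be.within(19, 30);
});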
On the Tests tab for your request you can use either of these tests (they do the same thing). The first reads better and uses Chai's native at.most and at.least assertions, while the second evaluates the age boundary as a boolean and then asserts that it is true:
const jsonData = pm.response.json();
const age = Number(jsonData.age);

pm.test("Check age is between 19 and 30 - most/least", function () {
    pm.expect(age).to.be.at.most(30);
    pm.expect(age).to.be.at.least(19);
});

pm.test("Check age is between 19 and 30 - boolean check", function () {
    const ageCheck = age >= 19 && age <= 30;
    pm.expect(ageCheck).to.be.true;
});
If this is the first time you're looking into Postman tests, this is how it should look in the Tests tab of your request.
I have some data spanning columns A to M, starting at row 10 and ending at row 59.
The dates in column F (F10:F) are text strings, converted to proper dates in column N (the conversion process is described in an earlier question).
M3 is set to =NOW().
In cell N3 I have: =M3+14.
I want to delete all the rows whose date in column N10:N comes before [today + 2 weeks] (i.e. cell N3).
When I run my Apps Script, the if statement never matches; if I comment it out, the for loop runs and deletes the rows, so I'm fairly sure the problem is, once again, date formatting.
In this question I ask: how do I compare the values of N10:N with N3, in order to delete all the rows that don't meet the condition if (datesNcol <= targetDate)? (In the code this is written as if (rowData[i] < flatArray).)
I'm also including a demo sheet with this problem explained in detail and two alternatives (a getBackground condition and a numeric-days condition).
Attempts:
This is a simplified code example:
const gen = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Generatore');
const bVals = gen.getRange('B10:B').getValues();
const bFilt = bVals.filter(String);
const dataLastRow = bFilt.length;

function deleteExpired() {
  dateCorrette(); // ignore: helper that writes the corrected dates from N10 down to dataLastRow
  var dateCorrect = gen.getRange(10, 14, dataLastRow, 1).getValues();
  var targetDate = gen.getRange('N3').getValues();
  var flatArray = [].concat.apply([], targetDate);
  for (var i = dateCorrect.length - 1; i >= 0; i--) {
    var rowData = dateCorrect[i];
    if (rowData[i] < flatArray) {
      gen.deleteRow(i + 10);
    }
  }
}
If I run the script, nothing is deleted.
If I comment out the if statement and its closing bracket, it deletes all the rows of the list one by one.
I can't manage to satisfy that condition.
Right now, it logs [Sun Jan 01 10:33:20 GMT-05:00 2023] as flatArray and [Wed Dec 21 03:00:00 GMT-05:00 2022] as dateCorrect[49], which is the first row to delete, i.e. the 50th (and the same holds for all the dateCorrect[i] dates).
I tried calling getTime() on the targetDate variable, but it only works together with getValue(), not getValues(); I then don't know how to use getTime() on rowData, which is based on dateCorrect[i] and therefore has to come from getValues(). In that case the flatArray variable also has to be commented out (otherwise it logs [ ] for flatArray instead of the corrected date).
I leave the other attempts in the demo sheet, because I want to prioritize this date problem and get it clear in my head.
Thanks for all the help.
DEMO SHEET, ITA Locale time
I don't know how the demo sheet behaves with Apps Script, so I suggest copying the code into a personal sheet.
UPDATE:
I've also tried adding an extra column, with a built-in IF function that writes "del" if the row has to be deleted.
=IF(O10>14;"del";"")
And then
var boba = gen.getRange(10, 16, bLast, 1).getDisplayValues();
// ...
if (boba[i] == 'del')
This does the job. But I can't understand why the other methods don't work.
Try this. It seems like you do a lot of things that aren't necessary, unless I'm missing something.
A few notes. I typically do not use global variables unless absolutely necessary. I don't create a variable for the last row unless I have to use that value multiple times in my script; I use the method Sheet.getLastRow() instead. dateCorrect is a 2D array with one column, so the second index can only be [0]. And getRange('N3') is a single cell, so getValue() is good enough.
function deleteExpired() {
  const gen = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Generatore');
  var dateCorrect = gen.getRange(10, 14, gen.getLastRow() - 9, 1).getValues();
  var targetDate = gen.getRange('N3').getValue();
  for (var i = dateCorrect.length - 1; i >= 0; i--) {
    if (dateCorrect[i][0] < targetDate) { // Date objects compare via their epoch-millisecond values
      gen.deleteRow(i + 10);
    }
  }
}
Try this:
function delRows() {
  const ss = SpreadsheetApp.getActive();
  const gsh = ss.getSheetByName('Generatore');
  const colB = gsh.getRange('B10:B' + gsh.getLastRow()).getValues(); // not used below
  var colN = gsh.getRange('N10:N' + gsh.getLastRow()).getValues();
  // current date + 14 days, at midnight, as epoch milliseconds
  var tdv = new Date(new Date().getFullYear(), new Date().getMonth(), new Date().getDate() + 14).valueOf();
  let d = 0; // rows already deleted, to offset the remaining row indices
  colN.forEach((n, i) => {
    if (new Date(n[0]).valueOf() < tdv) {
      gsh.deleteRow(i + 10 - d++);
    }
  });
}
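As an aside on the getTime() confusion in the question: in Apps Script (plain JavaScript), the relational operators < and > coerce Date objects to their epoch-millisecond values, so comparing two Dates directly is equivalent to comparing their getTime() results. A minimal sketch:

var expiry = new Date('2022-12-21T03:00:00'); // e.g. a value from getValues()
var target = new Date('2023-01-01T10:33:20'); // e.g. the N3 target date

// < coerces both Dates to epoch milliseconds before comparing...
console.log(expiry < target); // true

// ...so this explicit form behaves identically
console.log(expiry.getTime() < target.getTime()); // true

Note this only holds for the relational operators; == compares object references, so two distinct but equal Dates are not == to each other.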
My application manages a user's bookings. Each booking has a start_date and an end_date, and their current layout in DynamoDB is the following:

PK                 SK           DATA
USER#1#BOOKINGS    BOOKING#1    {s: '20190601', e: '20190801'}
[GOAL] I would like to query all reservations that overlap a given search interval.
I tried to find a solution for this, but I only found a way to query all items that lie entirely inside a search interval, which solves only that narrower problem.
I decided to implement that approach and try to adapt it to solve my problem, but I didn't find a solution. Below is my implementation of "query inside interval" (this is not a DynamoDB implementation, but I would replace the isBetween function with the BETWEEN operator):
import { zip } from 'lodash';

const bookings = [
  { s: '20190601', e: '20190801', i: '' },
  { s: '20180702', e: '20190102', i: '' }
];

const search_start = '20190602'.split('');
const search_end = '20190630'.split('');

// s: 20190601, e: 20190801 -> i: 2200119900680011
for (const b of bookings) {
  b['i'] = zip(b.s.split(''), b.e.split(''))
    .reduce((p, c) => p + c.join(''), '');
}

// (start_search: 20190502, end_search: 20190905) => 22001199005
const start_clause: string[] = [];
for (let i = 0; i < search_start.length; i += 1) {
  if (search_start[i] === search_end[i]) {
    start_clause.push(search_start[i] + search_end[i]);
  } else {
    start_clause.push(search_start[i]);
    break;
  }
}
const s_index = start_clause.join('');

// (end_search: 20190905, start_search: 20190502) => 22001199009
const end_clause: string[] = [];
for (let i = 0; i < search_end.length; i += 1) {
  if (search_end[i] === search_start[i]) {
    end_clause.push(search_end[i] + search_start[i]);
  } else {
    end_clause.push(search_end[i]);
    break;
  }
}
const e_index = (parseInt(end_clause.join('')) + 1).toString();

const isBetween = (s: string, e: string, v: string) => {
  const sorted = [s, e, v].sort();
  console.info(`sorted: ${sorted}`);
  return sorted[1] === v;
};

const filtered_bookings = bookings
  .filter(b => isBetween(s_index, e_index, b.i));
console.info(`filtered_bookings: ${JSON.stringify(filtered_bookings)}`);
There’s not going to be a beautiful and simple yet generic answer.
Probably the best approach is to pre-define your time period size (days, hours, minutes, seconds, whatever) and use its value as the PK. For each day (or hour, or whatever) the item collection then holds every item touching that period, with the start time as the sort key (so you can do the inequality there), and you can use a filter on the end-time attribute.
If your chosen time period is days and you need to query across a week, you'll issue seven queries. So pick a time unit that's around the same size as the intervals you typically query.
Remember you need to put all items touching that day (or whatever) into the day's collection: if an item spans a week, it needs to be inserted 7 times. A sketch of this write/read pattern follows.
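A minimal sketch of that pattern with the AWS SDK v3 document client; the table name, key shapes, and day granularity are all assumptions for illustration:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand, QueryCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = 'bookings_by_day'; // hypothetical table: PK = day bucket, SK = start date + booking id

// Every 'YYYYMMDD' day bucket an [s, e] interval touches.
function daysTouched(s, e) {
  const toDate = ymd => new Date(`${ymd.slice(0, 4)}-${ymd.slice(4, 6)}-${ymd.slice(6, 8)}T00:00:00Z`);
  const days = [];
  const end = toDate(e);
  for (let d = toDate(s); d <= end; d.setUTCDate(d.getUTCDate() + 1)) {
    days.push(d.toISOString().slice(0, 10).replace(/-/g, ''));
  }
  return days;
}

// Write: one copy of the booking per day it touches.
async function putBooking(booking) {
  for (const day of daysTouched(booking.s, booking.e)) {
    await doc.send(new PutCommand({
      TableName: TABLE,
      Item: { PK: `DAY#${day}`, SK: `${booking.s}#${booking.id}`, ...booking },
    }));
  }
}

// Read: one query per day in the search interval, de-duplicated on booking id.
async function bookingsOverlapping(searchStart, searchEnd) {
  const seen = new Map();
  for (const day of daysTouched(searchStart, searchEnd)) {
    const res = await doc.send(new QueryCommand({
      TableName: TABLE,
      KeyConditionExpression: 'PK = :pk',
      ExpressionAttributeValues: { ':pk': `DAY#${day}` },
    }));
    for (const item of res.Items ?? []) seen.set(item.id, item);
  }
  return [...seen.values()];
}

Every booking stored under a day inside the search window necessarily touches it, so no extra end-date filter is needed on the read side; the trade-off is write amplification proportional to the booking's length.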
Disclaimer: this is a very use-case-specific and non-general approach I took when trying to solve the same problem; it picks up on hunterhacker's approach.
Observations from my use case:
The data I'm dealing with is financial/stock data, which spans from roughly 50 years in the past to 150 years into the future.
I have many thousands of items per year, and I would like to avoid pulling in all 200 years of information.
The vast majority of the items I want to query span a time range that fits within a single year (i.e. most items don't go from 30-Dec-2001 to 02-Jan-2002, but rather from 05-Mar-2005 to 10-Mar-2005).
Based on the above, I decided to add an LSI and store the relevant year on every item whose start-to-end range fits within a single year. For items that straddle a year (or more), I set that LSI attribute to 0.
The querying looks like:
if query_straddles_year:
    # This doesn't happen often in my use case
    result = query_all_and_filter_after()
else:
    # Most cases end up here (looking for a single day, for instance)
    year_constrained_result = query_using_lsi_for_that_year()
    result_on_straddling_bins = query_using_lsi_marked_with_0()  # <-- this is to get any of the items that do straddle a year
    filter_and_combine(year_constrained_result, result_on_straddling_bins)
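A rough sketch of that read path with the v3 document client; the table name, LSI name, and attribute names are all assumptions (note that YEAR is a DynamoDB reserved word, hence the attribute-name placeholder):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Query one LSI year bucket (0 = items straddling a year boundary).
async function queryYearBucket(pk, year) {
  const res = await doc.send(new QueryCommand({
    TableName: 'bookings',   // hypothetical table
    IndexName: 'year-lsi',   // hypothetical LSI keyed on the year attribute
    KeyConditionExpression: 'PK = :pk AND #y = :y',
    ExpressionAttributeNames: { '#y': 'year' },
    ExpressionAttributeValues: { ':pk': pk, ':y': year },
  }));
  return res.Items ?? [];
}

// Overlap lookup for a search interval contained in a single year:
// pull that year's bucket plus the straddlers, then filter client-side.
async function overlapping(pk, year, searchStart, searchEnd) {
  const candidates = [
    ...(await queryYearBucket(pk, year)),
    ...(await queryYearBucket(pk, 0)),
  ];
  // Two intervals [s, e] and [searchStart, searchEnd] overlap iff
  // s <= searchEnd && e >= searchStart.
  return candidates.filter(b => b.s <= searchEnd && b.e >= searchStart);
}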
I'm trying to use Chart.js with a datetime x axis, and I need to adjust all my values by subtracting 5 hours. Here's some of my code:
var timeFormat = 'MM/DD HH:mm';
time: {
format: timeFormat,
tooltipFormat: 'll',
parser: function(utcMoment) {
return moment(utcMoment).utcOffset(5, true);
}
},
Without the parser function, my values are normal (10:00, January 10, 2021), but with the parser function my values are set back all the way to 2001 (10:00, January 10, 2001). Yes, two thousand and one. Note that the time is not actually changed. So there are two errors: 1. the time is not adjusted when it should be; 2. the year is adjusted when it shouldn't be. Why could this be?
I will assume that the reason you want to roll it back by 5 hours is because of a timezone difference. If that's the case, you should use moment-timezone instead of moment.
With that said, subtracting 5 hours from the current date is actually simpler than what you're doing.
Before feeding a date into moment, you need to convert it to a js Date object, like so: new Date('2021-01-10 00:00:00'). Since your parser function receives the date in MM/DD HH:mm format, you need to append the year to it first.
So here is how your code should look:
parser: function(utcMoment) {
    // 'MM/DD HH:mm' -> 'MM/DD/YYYY HH:mm', so new Date() can parse it with the right year
    const new_date = utcMoment.split(' ')[0] + '/' + (new Date().getFullYear()) + ' ' + utcMoment.split(' ')[1];
    return moment(new Date(new_date)).subtract({ hours: 5 });
}
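If the 5-hour shift really is a timezone correction, here is a hedged sketch of the moment-timezone route suggested above; the IANA zone name is an assumption, so pick the one your data actually belongs to:

const moment = require('moment-timezone');

// Parse 'MM/DD HH:mm' (no year) as UTC, then render it in the target zone.
function parseChartTime(value) {
  const withYear = value.split(' ')[0] + '/' + new Date().getFullYear()
    + ' ' + value.split(' ')[1];
  return moment.tz(withYear, 'MM/DD/YYYY HH:mm', 'UTC').tz('America/New_York'); // assumed zone
}

console.log(parseChartTime('01/10 10:00').format()); // e.g. '2021-01-10T05:00:00-05:00' when run in 2021

This way no manual hour arithmetic is needed, and daylight-saving transitions are handled for you.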
I have a table with only one column family, and this column family has a TTL of 172800 SECONDS (2 DAYS). I need some data to be deleted before that deadline: if I want a value to expire in 5 minutes, I calculate the expiry time and set the insert date to be 5 minutes before that expiry time.
I am using the HBase Client for Java to do this.
But the value doesn't seem to expire. Any suggestions on the same?
I used cbt to create the table:
cbt createtable my_table families=cf1:maxage=2d
HColumnDescriptor:
{NAME => 'cf1', BLOOMFILTER => 'ROW', VERSIONS => '2147483647', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => '172800 SECONDS (2 DAYS)', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
Java Code:
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import java.io.IOException;
import java.util.Calendar;
import java.util.Date;
public class BigTable {
    public static void main(String... args) {
        String projectId = "my-gcp-project-id";
        String instanceId = "my-bigtable-instance-id";
        String tableId = "my-table"; // my-bigtable-table-id
        try (Connection connection = BigtableConfiguration.connect(projectId, instanceId)) {
            try (Table table = connection.getTable(TableName.valueOf(tableId))) {
                HTableDescriptor hTableDescriptor = table.getTableDescriptor();
                hTableDescriptor.setCompactionEnabled(true);
                byte[] cf1 = Bytes.toBytes("cf1");
                byte[] rk1 = Bytes.toBytes("rowkey1");
                byte[] q1 = Bytes.toBytes("q1");
                HColumnDescriptor cfDescriptor1 = hTableDescriptor.getFamily(cf1);
                System.out.println("\n " + cfDescriptor1);
                Calendar now = Calendar.getInstance();
                Calendar now1 = Calendar.getInstance();
                now1.setTime(now.getTime());
                long nowMillis = now.getTimeInMillis(); // Current time
                now.add(Calendar.SECOND, cfDescriptor1.getTimeToLive()); // Adding 172800 SECONDS (2 DAYS) to current time
                long cfTTLMillis = now.getTimeInMillis(); // Time the values in the column family will expire at
                now1.add(Calendar.SECOND, 300); // Adding 300 secs (5 mins)
                long expiry = now1.getTimeInMillis(); // Time the value should actually live
                long creationTime = nowMillis + cfTTLMillis - expiry;
                System.out.println("\n Date nowMillis:\t" + new Date(nowMillis) + "\n Date creationTime:\t" + new Date(creationTime) + "\n Date cfTTLMillis:\t" + new Date(cfTTLMillis));
                // Add data
                Put p = new Put(rk1, creationTime);
                p.addColumn(cf1, q1, Bytes.toBytes("CFExpiry_2d_ExpTime_5mins"));
                //p.setTTL(creationtime); // What does this do?
                table.put(p);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Calculated dates:
Date nowMillis: Wed Oct 03 10:34:15 EDT 2018
Date creationTime: Fri Oct 05 10:29:15 EDT 2018
Date cfTTLMillis: Fri Oct 05 10:34:15 EDT 2018
The value is inserted correctly with the correct calculated dates, but it doesn't seem to expire. Please correct my concepts if I am wrong.
Edit:
After the correction below in the date calculation, the values do expire.
long nowMillis = System.currentTimeMillis() / 1000;           // now, in seconds (despite the name)
long cfTTLMillis = nowMillis - cfDescriptor1.getTimeToLive(); // now minus the 2-day TTL, in seconds
long creationTime = (cfTTLMillis + 300) * 1000;               // a timestamp (TTL - 5 min) in the past, in milliseconds
Cloud Bigtable does not garbage collect rows until a compaction occurs. That may happen hours (or possibly a few days) after the expected expiration.
If you want to make sure to not read data that should have expired, please set a timestamp range filter on the data read so that values outside of the allowed range aren't returned in the query.
Alternatively, you'll have to filter them out after the data is returned, but it's much more efficient to filter it out server-side so that the client does not have to download or process it.
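In code, that read-side suggestion might look like the following sketch with the Node.js Bigtable client; I'm assuming the client's timestamp-range filter shape (time: {start, end}) and reusing the ids from the question:

// Read with a timestamp-range filter so cells that should already have
// expired are never returned, even if compaction hasn't removed them yet.
const { Bigtable } = require('@google-cloud/bigtable');

async function readLiveCells() {
  const table = new Bigtable()
    .instance('my-bigtable-instance-id')
    .table('my-table');

  const ttlMs = 172800 * 1000; // the column family's 2-day TTL
  const oldestAllowed = new Date(Date.now() - ttlMs);

  // Only cells written within the TTL window are returned.
  const [rows] = await table.getRows({
    filter: [{ time: { start: oldestAllowed } }],
  });
  return rows;
}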
I have a map reduce view:
// ...
emit(diffYears, doc.xyz);
reduced with _sum.
xyz is a number, which is then summed per integer diffYears value.
The output looks roughly like this:
4 1204.9
5 796.19
6 1124.8
7 1112.6
8 1993.62
9 159.26
10 395.41
11 456.05
12 457.97
13 39.80
14 483.68
15 269.469
etc..
What I would like to do is group the results as follows:
Grouping    Total per group
0-4         1959.2    i.e. add up the xyz's for years 0,1,2,3,4
5-9         3998.5    same for 5,6,7,8,9, etc.
10-14       3566.3
I saw a suggestion where a list was used on a view output here: Using a CouchDB view, can I count groups and filter by key range at the same time?
but I have been unable to adapt it to get any kind of result.
The code given is:
{
  _id: "_design/authors",
  views: {
    authors_by_date: {
      map: function(doc) {
        emit(doc.date, doc.author);
      }
    }
  },
  lists: {
    count_occurrences: function(head, req) {
      start({ headers: { "Content-Type": "application/json" } });
      var result = {};
      var row;
      while (row = getRow()) {
        var val = row.value;
        if (result[val]) result[val]++;
        else result[val] = 1;
      }
      return result;
    }
  }
}
I substituted var val = row.key in this section:
while (row = getRow()) {
  var val = row.value;
  if (result[val]) result[val]++;
  else result[val] = 1;
}
(although in this case the result is a count.)
This seems to be the way to do it.
(It is like having a startkey and endkey for each grouping, which I can do manually, of course, but not inside a single process. Or is there a way of passing multiple start- and endkeys to one GET command?)
This must be a fairly normal thing to do, especially for researchers doing statistical analysis, so I assume it does get done; I just cannot locate examples as far as CouchDB is concerned.
I would appreciate some help with this please or a pointer in the right direction.
Many thanks.
EDIT:
Perhaps the answer lies in a 'reduce' step that groups the output?
You can accomplish what you want using a complex key. The limitation is that the group size is static and needs to be defined in the view.
You'll need a simple step function to create your groups within your map function, like:
var size = 5;
var group = (doc.diffYears - (doc.diffYears % size)) / size;
emit([group, doc.diffYears], doc.xyz);
The reduce function can remain _sum.
Now when you query the view use group_level to control the grouping. At group_level=0, everything will be summed and one value will be returned. At group_level=1 you'll receive your desired sums of 0-4, 5-9 etc. At group_level=2 you'll get your original output.
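For instance, querying such a view over HTTP (the database, design-doc, and view names here are made-up placeholders) might look like:

// group_level=1 groups on the first element of the complex key, i.e. per 5-year bucket.
const base = 'http://localhost:5984/mydb/_design/stats/_view/xyz_by_group';

const res = await fetch(`${base}?group_level=1`);
const { rows } = await res.json();
// rows look like: [{ key: [0], value: 1959.2 }, { key: [1], value: 3998.5 }, ...]
const size = 5;
for (const r of rows) {
  console.log(`${r.key[0] * size}-${r.key[0] * size + size - 1}: ${r.value}`);
}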