How to request more than 100 devices in WSO2 IoT? - wso2

How can I increase the offset and limit beyond 100? At present we are unable to request more than 100 devices at a time. Can you please help?

Pagination works like below:
- offset = 0, limit = 100 -> devices 1-100
- offset = 100, limit = 100 -> devices 101-200
- offset = 200, limit = 100 -> devices 201-300
- ...and so on...
As you can see above, limit can be any positive integer up to 100. Most of the time limit is constant, and we keep increasing the offset for each subsequent API call.
For example, start with offset = 0 and limit = 100; then do offset = offset + limit for the next call:
- https://192.168.1.18:8243/api/device-mgt/v1.0/devices?offset=0&limit=100
- https://192.168.1.18:8243/api/device-mgt/v1.0/devices?offset=100&limit=100
- https://192.168.1.18:8243/api/device-mgt/v1.0/devices?offset=200&limit=100
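As an illustration, here is a minimal Python sketch that pages through the endpoint above. The gateway URL is taken from the examples; the Authorization header and the "devices" response key are assumptions, not confirmed API details:

import requests

BASE = "https://192.168.1.18:8243/api/device-mgt/v1.0/devices"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

def fetch_all_devices(limit=100):
    devices, offset = [], 0
    while True:
        resp = requests.get(BASE, headers=HEADERS, verify=False,  # self-signed gateway cert
                            params={"offset": offset, "limit": limit})
        resp.raise_for_status()
        page = resp.json().get("devices", [])  # assumed response key
        devices.extend(page)
        if len(page) < limit:  # a short page means we have reached the end
            break
        offset += limit        # advance by the page size
    return devices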


Scheduling Problem - solver does not see an obvious solution

The problem consists of a set of tasks, each requiring some resources from a renewable resource pool.
The goal is to complete all tasks within the given planning horizon so that
the number of resources used is minimised.
The main constraint is that tasks assigned to the same resource must not overlap.
The code below does not scale well as I increase the number of tasks.
There should be a simple optimal solution: allocate resources 1 to 15 to all tasks. But the solver seems to struggle to find it when n_tasks is higher than, say, 1000. I have tried a number of things with the search annotations, but no breakthrough so far.
Here is the model and data:
n_tasks = 3000;
duration = [1 | i in 1..n_tasks];
n_resources = 35;
number_resource_needed = [15 | i in 1..n_tasks];
t_max = 18000;
include "cumulative.mzn";
%-----------------------------------------------------------------------------%
% MODEL PARAMETERS
% Tasks
int: n_tasks;                                % The number of tasks
set of int: Tasks = 1..n_tasks;              % The set of all tasks
array[Tasks] of int: duration;               % The task durations
% Resources
int: n_resources;
set of int: Resources = 1..n_resources;
array[Tasks] of int: number_resource_needed; % The resource requirements
% Maximum duration
int: t_max;
%~~~~~~~~~~~~~~~~~
% MODEL VARIABLES
%~~~~~~~~~~~~~~~~~
array[Tasks] of var 1..t_max: start;         % The start times
array[Tasks, Resources] of var 0..1: resource_allocation; % Selection of resources per task
%~~~~~~~~~~~~~~~~~
% CONSTRAINTS
%~~~~~~~~~~~~~~~~~
% Each task gets exactly the number of resources it needs
constraint forall(t in Tasks) (
    sum(r in Resources) (resource_allocation[t, r]) = number_resource_needed[t]
);
% Each resource works on at most one task at a time
constraint forall(r in Resources) (
    cumulative(start, duration, [resource_allocation[t, r] | t in Tasks], 1)
);
var int: objective = sum(r in Resources) (r * max([resource_allocation[t, r] | t in Tasks]));
var int: nb_active_workers = sum(r in Resources) (max([resource_allocation[t, r] | t in Tasks]));
% solve minimize objective;
solve :: seq_search([
    int_search(resource_allocation, input_order, indomain_max),
    int_search(start, input_order, indomain_min)
])
minimize objective;
output ["objective = \(objective)\n"];
output ["nb_active_workers = \(nb_active_workers)\n"];
I have used Chuffed with different options.
My results:
n_tasks = 100, optimal found in 34s
n_tasks = 500, optimal found in 3m 43s
n_tasks = 3000, first solution 2m 40s, optimal not found at 4m 00s
I would like to get a first solution faster, and also reach the optimum faster.

Creating a List of Event Counts

I am trying to loop through my data, and every time a threshold is exceeded I want to raise a flag and count it. At the end I want the output to be a data frame containing the rows that were flagged and their corresponding information.
I have gotten this far:
frame_of_reference = frame_of_reference.apply(pd.to_numeric, errors='coerce')
window_size = 10
for i in range(1, len(frame_of_reference['Number_of_frames']), window_size):
    events_ttc = [1 if 0.0 < frame_of_reference['TTC_radar'].any() <= 1.5 else 0]
events_ttc
But instead of giving me a dataframe, it only gives me a single one or zero.
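The loop overwrites events_ttc on every iteration, and .any() collapses the whole TTC_radar column to a single boolean, which is why only one 1 or 0 comes out. A vectorised boolean mask may be closer to what is wanted; here is a minimal sketch, assuming the column names above and that a row counts as an event when 0.0 < TTC_radar <= 1.5:

import pandas as pd

# Flag each row whose TTC_radar falls in (0.0, 1.5]; the threshold is taken from the question.
mask = (frame_of_reference['TTC_radar'] > 0.0) & (frame_of_reference['TTC_radar'] <= 1.5)
events = frame_of_reference[mask]  # dataframe holding only the flagged rows
event_count = mask.sum()           # total number of flagged events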

How to trace the movement time of nodes in ns-3?

So basically, I used the RandomWaypoint mobility model in ns-3, and I got results for the nodes like this:
/NodeList/5/$ns3::MobilityModel/CourseChange x = 10, y = 20
/NodeList/6/$ns3::MobilityModel/CourseChange x = 30, y = 40
/NodeList/7/$ns3::MobilityModel/CourseChange x = 50, y = 80
/NodeList/5/$ns3::MobilityModel/CourseChange x = 10, y = 20
/NodeList/6/$ns3::MobilityModel/CourseChange x = 30, y = 40
/NodeList/7/$ns3::MobilityModel/CourseChange x = 50, y = 80
/NodeList/5/$ns3::MobilityModel/CourseChange x = 10, y = 20
/NodeList/6/$ns3::MobilityModel/CourseChange x = 30, y = 40
/NodeList/7/$ns3::MobilityModel/CourseChange x = 50, y = 80
At time 2s client sent 1024 bytes to 10.1.2.4 port 9
At time 2.01596s server received 1024 bytes from 10.1.3.3 port 49153
At time 2.01596s server sent 1024 bytes to 10.1.3.3 port 49153
At time 2.02464s client received 1024 bytes from 10.1.2.4 port 9
......
But how can I record the time of each node's movement?
I think the most relevant code involves using Simulator::Now().GetSeconds().
Here is the code I wrote:
std::ostringstream oss2(std::ostringstream::ate);
oss2.str("TimeStamp:");
oss2 << Simulator::Now().GetSeconds ();
std::cout << oss2.str() << "\t";
But I got a result equal to 0s. I am confused about this, and I would appreciate it if anyone could offer me a better solution and help me figure this out.
Many thanks.
The logic is correct; Simulator::Now() provides the time. You can print it in a single line, no need for four:
std::cout << "TimeStamp:" << Simulator::Now().GetSeconds() << "\t";
The fact that you got t = 0 s is probably because the nodes do not 'change' after that. The CourseChange callback is fired only on a change in speed or direction; if a node keeps moving at constant velocity, it will not fire.
Following your comment on the previous post, you said you have used ListPositionAllocator. If you only have a single entry in the list, the position will not change apart from the initial one at t = 0 (which is what you see in the output).

Django-PostgresPool max poolsize

Here is my setting:
DATABASE_POOL_ARGS = {
    'max_overflow': 3,
    'pool_size': 3,
    'recycle': 300
}
I set pool_size = 3, but when I look at the result in Postgres (SELECT sum(numbackends) FROM pg_stat_database;), the number is still over 3. How can I configure the connection limit?
I want to set the maximum to 100 and let all requests share these 100 connections to communicate with PostgreSQL.
pool_size is the number of idle connections (i.e. at least that many will always be connected), and max_overflow is the maximum allowed on top of that.
So the total maximum is pool_size + max_overflow. You should set pool_size to the minimum number you think you will typically need, and max_overflow to 100 - pool_size. A sketch follows below.
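For example, here is a minimal sketch of the setting for a cap of 100 total connections. The 10/90 split is an assumption; choose the split that matches your typical load:

# Total maximum = pool_size + max_overflow = 10 + 90 = 100.
DATABASE_POOL_ARGS = {
    'pool_size': 10,     # idle connections always kept open (assumed typical load)
    'max_overflow': 90,  # extra connections allowed on top of pool_size
    'recycle': 300       # recycle connections after 300 seconds, as in the original setting
}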

pycrypto is slow encrypting and decrypting

In practice, I select an executable of size 20 MB.
I read the content using file.read(16).
If the length of the returned byte string is less than 16, I fill the rest with \0 (NULL) bytes.
from Crypto.Cipher import AES

f = open("./installer.exe", "rb")
obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
bs = b""
t = f.read(16)
while t != b"":
    if len(t) < 16:
        t = t + b"\0" * (16 - len(t))  # if < 16 bytes, pad with zeros
        bs = bs + obj.encrypt(t)
    else:
        bs = bs + obj.encrypt(t)
    t = f.read(16)
Then bs contains the byte string of ALL the content encrypted with 0123456789012345.
So the mechanism is: read the file first, then encrypt the content as in the piece of code above (using obj.encrypt()), then write a new file with the encrypted content. Then I read the data of the encrypted file, decrypt it by a similar procedure using obj.decrypt() in 16-byte chunks, and write a new file with the decrypted data.
This takes approximately 3 minutes.
Is that fast, slow, or expected?
From what I saw, the module is written in C. Should I maybe use embedded Cython to make it faster?
How can PGP supposedly decrypt larger amounts of data in real time, for example in an encrypted virtual disk?
edit:
This takes almost the same time:
obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
bs = b""
t = f.read(16)
while t != b"":
    if len(t) < 16:
        t = t + b"\0" * (16 - len(t))
        bs = bs + t
    else:
        bs = bs + t
    t = f.read(16)
bse = obj.encrypt(bs)
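That this version is just as slow suggests the bottleneck is not encrypt() itself but the repeated bs = bs + ... concatenation, which copies the whole accumulated buffer on every iteration (quadratic overall). A sketch of the usual workaround, collecting the chunks in a list and joining once at the end:

chunks = []
t = f.read(16)
while t != b"":
    if len(t) < 16:
        t = t + b"\0" * (16 - len(t))
    chunks.append(t)  # O(1) append instead of copying the buffer each time
    t = f.read(16)
bse = obj.encrypt(b"".join(chunks))  # one join, then one encrypt call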
OK. The problem was the size of the buffers being encrypted. I decided to use 64000-byte strings.
The procedure is simple: split the total size into segments and encrypt each one. For the last segment, if its size is less than 64000 and not a multiple of 16, find the nearest multiple of 16 and fill the remaining space with padding.
import math

bs = b""
dt = f.read()
dtl = len(dt)
dtr = (dtl // 64000) + 1  # number of 64000-byte segments (integer division)
for x in range(0, dtr):
    if x == dtr - 1:
        i1 = 64000 * x
        dst = dtl - i1
        i = math.ceil(dst / 16.0) * 16  # round the last segment up to a multiple of 16
        dst = i - dst
        buf = dt[i1:] + (b"\0" * int(dst))
        bs = bs + obj.encrypt(buf)
    else:
        i1, i2 = 64000 * x, 64000 * (x + 1)
        bs = bs + obj.encrypt(dt[i1:i2])
Now it takes 10 seconds.
Thanks, everyone.
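As a follow-up, reading the whole 20 MB file into memory is not necessary either; the same speedup can be had by streaming the file in large chunks, since the cipher object carries the CBC chaining state across encrypt() calls. A minimal sketch under the same assumptions as the question (zero padding and a fixed key/IV; a real application should use PKCS#7 padding and a random IV), with "./installer.enc" as a hypothetical output name:

from Crypto.Cipher import AES

CHUNK = 64 * 1024  # encrypt in 64 KiB chunks to amortise the per-call overhead

obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0" * 16)
with open("./installer.exe", "rb") as fin, open("./installer.enc", "wb") as fout:
    while True:
        chunk = fin.read(CHUNK)
        if not chunk:
            break
        if len(chunk) % 16:  # pad the final chunk to a 16-byte boundary
            chunk += b"\0" * (16 - len(chunk) % 16)
        fout.write(obj.encrypt(chunk))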