I am collecting data from weather sensors (every 5 minutes), and over days and months the number of records becomes huge, so rendering the chart takes forever. It works, but the delay is too high. So instead of displaying all the values at once, I want to build the graph from a selected range of dates and fetch data accordingly.
I am using Python, sqlite3 and Flask, and I am trying to copy this example from Highcharts:
https://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/stock/demo/lazy-loading/
The example uses PHP; here is its code:
https://github.com/highcharts/highcharts/blob/master/samples/data/from-sql.php
This is my setExtremes function in the HTML page:
function setExtremes(e) {
    // only react when a range-selector (zoom) button was clicked
    if (typeof e.rangeSelectorButton !== 'undefined') {
        if (e.rangeSelectorButton.type === 'all') {
            var chart = Highcharts.charts[0];
            chart.showLoading('Loading data from server...');
            $.getJSON('http://192.168.0.74:5000/rango?start=' + Math.round(e.min) +
                      '&end=' + Math.round(e.max) + '&callback=?', function (data) {
                alert('I am in');
                // split the rows into one flat [timestamp, value] list per sensor
                for (var i = 0; i < data.length; i++) {
                    processed_json_temperatura.push(data[i][0], data[i][1]);
                    processed_json_presion.push(data[i][0], data[i][2]);
                    processed_json_humedad.push(data[i][0], data[i][3]);
                    processed_json_lluvia.push(data[i][0], data[i][4]);
                    processed_json_horas_frio.push(data[i][0], data[i][5]);
                }
                // regroup the flat lists into [x, y] pairs
                temperatura_matriz = listToMatrix(processed_json_temperatura, 2);
                presion_matriz = listToMatrix(processed_json_presion, 2);
                humedad_matriz = listToMatrix(processed_json_humedad, 2);
                lluvia_matriz = listToMatrix(processed_json_lluvia, 2);
                horas_frio_matriz = listToMatrix(processed_json_horas_frio, 2);
                // update the series with the new values
                chart.series[0].setData(temperatura_matriz);
                chart.series[1].setData(presion_matriz);
                chart.series[2].setData(humedad_matriz);
                chart.series[3].setData(lluvia_matriz);
                chart.series[4].setData(horas_frio_matriz);
                chart.hideLoading();
            });
        }
    }
}
On the server side, this is my Python code with Flask:
#app.route("/rango")
def rango():
start=request.args.get('start')
end=request.args.get('end')
range = int(end)/1000 - int(start)/1000
# find the right table
# two days range loads minute data
if (range < 2 * 24 * 3600 * 1000):
connection = sqlite3.connect("/var/www/nueva_estacion.db")
cursor = connection.cursor()
cursor.execute("select (julianday(timestamp)-2440587.5)*86400000, round(temp1,2) as temp1, presion, round(humedad,2) as humedad, lluvia,aux1 from raw_all_sensors where datetime(timestamp) >=datetime('now', '-6 Day')")
results = cursor.fetchall()
print json.dumps(results)
return json.dumps(results)
When I click the zoom button (on Highstock, actually) I do see the "Loading data from server..." message, but the new graphs never show and I never get the "I am in" alert.
When monitoring the requests on the server side, I do see
192.168.0.122 - - [22/May/2019 19:47:22] "GET /rango?start=1558050049000&end=1558551372000&callback=jQuery18307335687220269529_1558565228146&_=1558565238543 HTTP/1.1" 200 -
where the start and end dates are clearly visible, but I do not understand the callback parameter and the numbers that follow it.
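From what I can tell, jQuery adds that parameter itself: ending the URL with &callback=? turns $.getJSON into a JSONP request, so the success handler only fires if the server wraps the JSON in that callback function. Here is a minimal sketch of what I understand the view would have to return (untested, my assumption about the fix):

import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/rango")
def rango():
    results = []  # filled from sqlite exactly as in the view above
    payload = json.dumps(results)
    callback = request.args.get('callback')
    if callback:
        # JSONP: wrap the JSON in the callback function name jQuery generated
        return app.response_class('%s(%s)' % (callback, payload),
                                  mimetype='application/javascript')
    return payload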
Just in case, this is a sample row coming from the server:
[1558546562999.989, 17.82, 1021, 76.15, 0, 1]
Can anyone shed some light here?
I have created a Map/Reduce script which fetches customer invoices and deletes them. If I create a saved search in the UI based on the criteria below, it shows 4 million records. Now, if I run the script, execution stops before completing the getInputData stage, as the maximum storage limit of that stage is 200 MB. So I want to fetch the first 4,000 records out of the 4 million, process them, and schedule the script to run every 15 minutes. Here is the code of the first stage (getInputData):
var count = 0;
var counter = 0;
var result = [];
var testSearch = search.create({
    type: 'customrecord1',
    filters: [ 'custrecord_date_created', 'notonorafter', 'startOfLastMonth' ],
    columns: [ 'internalid' ]
});
do {
    // fetch the next page of up to 1000 results
    var resultSearch = testSearch.run().getRange({
        start: count,
        end: count + 1000
    });
    for (var arr = 0; arr < resultSearch.length; arr++) {
        result.push(resultSearch[arr]);
    }
    count = count + 1000;                 // advance the paging window
    counter = counter + resultSearch.length;
} while (resultSearch.length >= 1000 && counter < 4000);
return result;
Creating the saved search takes a long time; is there any workaround to filter just the first 4000 records during saved search creation?
Why not a custom mass update?
It would be a 5-10 line script that grabs the internal id and record type of the current record in the criteria of the mass update, then deletes the record.
I believe this is what search.runPaged() and pagedData.fetch() are for.
search.runPaged runs the current search and returns summary information about paginated results - it does not give you the result set or save the search.
pagedData.fetch retrieves the data within the specified page range.
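As a rough illustration of that paged pattern (sketched in Python rather than SuiteScript; fetch_page is a hypothetical stand-in for a paged-search call such as pagedData.fetch, not a NetSuite API):

def collect_ids(fetch_page, page_size=1000, cap=4000):
    # Pull pages of results until the cap is reached or the pages run out.
    # fetch_page(start, end) is a hypothetical helper returning one page of ids.
    ids = []
    start = 0
    while len(ids) < cap:
        page = fetch_page(start, start + page_size)
        if not page:
            break
        ids.extend(page)
        start += page_size
    return ids[:cap]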
If you are intent on the Map/Reduce, you can just return your created search. NetSuite will run it and pass each line to the next phase. You can even use a saved search where you limit the number of lines, and then in your summarize phase re-trigger the script if there's anything left to do.
The 4k-record syntax, though, is:
var toDelete = [];
search.run().each(function (r) {
    toDelete.push(r.id);
    return toDelete.length < 4000;
});
return toDelete;
Finally, I normally do this as a scheduled mass update. It tends to interfere less with any production scheduled and Map/Reduce scripts.
/**
* @NApiVersion 2.x
* @NScriptType MassUpdateScript
*/
define(["N/log", "N/record"], function (log, record) {
function each(params) {
try {
record.delete({
type: params.type,
id: params.id
});
log.audit({ title: 'deleted ' + params.type + ' ' + params.id, details: '' });
}
catch (e) {
log.error({ title: 'deleting: ' + params.type + ' ' + params.id, details: (e.message || e.toString()) + (e.getStackTrace ? (' \n \n' + e.getStackTrace().join(' \n')) : '') });
}
}
return {
each:each
};
});
I am facing a problem with a Facebook Messenger chatbot; the problem is with storing and updating a context variable.
My code is divided into two parts.
Part 1:
var index = 0;
Facebookcontexts.forEach(function(value) {
    console.log(value.From);
    if (value.From == sender_psid) {
        FacebookContext.context = value.FacebookContext;
        console.log("Inside foreach " + JSON.stringify(FacebookContext.context));
        contextIndex = index;
    }
    index = index + 1;
});
Here I have created an array named Facebookcontexts to store the contexts of different users. This is where I get the position of the user in the Facebookcontexts array, which is used later.
Part 2:
if ((FacebookContext.context == null) || (Facebookcontexts.find(v => v.From == sender_psid) == undefined)) {
    Facebookcontexts.push({"From": sender_psid, "FacebookContext": response.context});
    console.log("I am where the sender is unknown" + JSON.stringify(Facebookcontexts) + "\n" + Facebookcontexts.length);
}
else if (Facebookcontexts.find(v => v.From == sender_psid) != undefined) {
    Facebookcontexts[contextIndex].FacebookContext = response.context;
    console.log("I am where I know the sender" + JSON.stringify(Facebookcontexts) + "\n" + Facebookcontexts.length);
}
In the if and else I decide whether to create a new record or update an old one.
Issue:
My issue is that every time, the condition if ((FacebookContext.context == null) || (Facebookcontexts.find(v => v.From == sender_psid) == undefined)) evaluates to true, and because of that the array length is 1 all the time.
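In case it clarifies what I am trying to achieve, the bookkeeping should simply be "one stored context per sender". Here it is sketched in Python rather than Node (the names are mine, not the Messenger API):

# one context per sender, keyed by sender id; this replaces both the
# linear search and the separate contextIndex bookkeeping
contexts = {}

def update_context(sender_psid, new_context):
    contexts[sender_psid] = new_context   # insert the first time, overwrite later

def get_context(sender_psid):
    return contexts.get(sender_psid)      # None if this sender is unknown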
I will look for some help from you guys. Thanks in advance.
I have a message table from which I would like to get the last 10 messages for a user; if they click "more", it should retrieve another 10, until there are no messages left. I could not see how to do this. So far I am able to get messages based on time, but this is not exactly what I want. Reading through the documentation I see a property called LastEvaluatedKey, but I can't find a working example of where and how to use it. This is my code for the time-based query:
long startDateMilli = (new Date()).getTime() - (15L*24L*60L*60L*1000L);
long endDateMilli = (new Date()).getTime() - (5L*24L*60L*60L*1000L);
java.text.SimpleDateFormat df = new java.text.SimpleDateFormat(
        "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
String startDate = df.format(startDateMilli);
String endDate = df.format(endDateMilli);

QuerySpec spec = new QuerySpec()
    .withProjectionExpression("to,fr,sta,cr")
    .withKeyConditionExpression("to = :v_to and cr between :v_start_dt and :v_end_dt")
    .withValueMap(new ValueMap()
        .withString(":v_to", username)   // was ":v_id", which never matched the expression
        .withString(":v_start_dt", startDate)
        .withString(":v_end_dt", endDate));

ItemCollection<QueryOutcome> items = table.query(spec);

System.out.println("\nfindRepliesPostedWithinTimePeriod results:");
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next().toJSONPretty());
}
How can I modify this to eliminate the time-based pagination and instead use a last-10-messages type of pagination?
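For reference, this is my understanding of the LastEvaluatedKey round-trip, sketched with Python's boto3 rather than the Java document API I am using (the table name is made up; treat it as a sketch, not a drop-in):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("messages")  # hypothetical table name

def last_messages(username, last_key=None, page_size=10):
    # one page of newest-first messages, plus the key to resume from
    kwargs = {
        "KeyConditionExpression": Key("to").eq(username),
        "ScanIndexForward": False,  # newest first by the sort key
        "Limit": page_size,
    }
    if last_key:
        kwargs["ExclusiveStartKey"] = last_key  # resume after the previous page
    resp = table.query(**kwargs)
    # LastEvaluatedKey is absent once the final page has been read
    return resp["Items"], resp.get("LastEvaluatedKey")

items, key = last_messages("alice")      # first page of 10
while key:                               # each "more" click feeds the key back in
    more, key = last_messages("alice", last_key=key)
    items.extend(more)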
I am trying to implement "streaming contents" on my PythonAnywhere account.
It looks more or less like what is shown here:
http://flask.pocoo.org/docs/0.10/patterns/streaming/
except that my view runs a complex calculation for maybe one minute and yields its data to my template, where a script is supposed to update some progress bars (source.onmessage).
This works perfectly on my development machine, but not on my PythonAnywhere account. On the server, the process looks jammed (the progress bars are never updated, except at the very end where they suddenly grow from 0% to 100%), although everything goes well under the hood; e.g. my print statements are correctly rendered in my server logs.
In the snippet quoted above, there is a note:
Note though that some WSGI middlewares might break streaming, so be
careful there in debug environments with profilers and other things
you might have enabled.
Could that be the problem here? And would there be a workaround?
JS code from my Jinja2 template:
<script type="text/javascript">
    /* progress bar */
    var source = new EventSource("{{ url_for('BP.run', mylongprocess_id=mylongprocess_id) }}");
    source.onmessage = function(event) {
        console.log(event.data);
        var data = event.data.split("!!");
        var nodeid = data[0];
        var process = data[1];
        var process_status = data[2];
        var postpro = data[3];
        var postpro_status = data[4];
        $('.pb1').css('width', process + '%').attr('aria-valuenow', process);
        $('.pb2').css('width', postpro + '%').attr('aria-valuenow', postpro);  // was 'process'
        document.getElementById("process_status").innerHTML = process_status;
        document.getElementById("postpro_status").innerHTML = postpro_status;
        document.getElementById("nodeid").innerHTML = nodeid;
        if (postpro >= 100) {
            setTimeout(function() {
                console.log("progress is finished!");
                document.getElementById("status").innerHTML = "redirecting to {{url_for('.view_sonix_result', mylongprocess_id=mylongprocess_id)}}";
                window.location.replace("{{url_for('.terminate_analysis', mylongprocess_id=mylongprocess_id)}}");
            }, 2);
        } else {
            document.getElementById("status").innerHTML = "pending...";
        }
    }
</script>
My (simplified) view:
@BP.route('/run/<int:mylongprocess_id>')
@login_required
def run(mylongprocess_id):
    mylongprocess = mylongprocess.query.get_or_404(mylongprocess_id)
    project = Project.query.get_or_404(mylongprocess.project_id)
    check_rights(current_user, project, 'user', 404)
    A, lcs = _create_analysis(mylongprocess)

    @copy_current_request_context
    def gen(mylongprocess, nodeid, store_path):
        print('now running %s' % A)
        for (loopnb, total_loops, pct, lclabel) in A.runiterator(lcs):
            print('ran %d/%d (%.1f%%) "%s"' % (loopnb, total_loops,
                                               pct, lclabel))
            progress = ('data: %s!!%f!!%s!!%f!!%s\n\n' %
                        (nodeid, pct, lclabel, 0, 'waiting...'))
            yield progress
        print('now postprocessing %s' % A)
        postpro = load_node(store_path, node_id=nodeid)
        for step, total, pct, action in postpro._builditer(target='web',
                                                           buildfile=None):
            progress = ('data: %s!!%f!!%s!!%f!!%s\n\n' %
                        (nodeid, 100, 'ok', pct, action.replace('_', ' ')))
            yield progress
        print('now terminating %s' % A)
        _terminate_analysis(A, mylongprocess)
    return Response(gen(mylongprocess, mylongprocess.nodeid), mimetype='text/event-stream')
Your traffic goes through an nginx proxy when it is hosted on PythonAnywhere, and nginx buffers the response unless told otherwise.
To get everything to flush:
- give your Flask responses the header response.headers['X-Accel-Buffering'] = 'no'
- keep a '\n' at the end of each string you yield, because Python also buffers until the end of a line.
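A minimal sketch with both fixes applied (the /progress route and the percentage payload are invented for the example):

from flask import Flask, Response

app = Flask(__name__)

@app.route("/progress")  # hypothetical route for the example
def progress():
    def gen():
        for pct in range(0, 101, 10):
            # the trailing newlines terminate the SSE message and let Python flush
            yield "data: %d\n\n" % pct
    resp = Response(gen(), mimetype="text/event-stream")
    resp.headers["X-Accel-Buffering"] = "no"  # tell nginx not to buffer
    return resp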
What is the most efficient way of reading in lots of images in CF / Railo and checking their width and height?
In my app I typically need to read in about 20+ images, and at the moment this takes up to 14 seconds to complete. A bit too long, really.
theImageRead = ImageNew(theImageSrc);
if (imageGetWidth(theImageRead) > 100) {
    writeOutput('<img src="' & theImageSrc & '" />');
}
Images are read from a list of absolute URLs. I need to get the images that are over a certain dimension.
If there's a quicker solution to this, I'd love to get your insight. Perhaps underlying Java methods?
I am also using jSoup if there's anything in that which could help.
Thanks,
Michael.
I don't believe there's any way to determine the pixel dimensions of an image without reading the bytes and creating an image object. The main bottleneck here will be the HTTP request overhead.
That said, there are a few ways to speed up what you're trying to do:
Use threads to request the images concurrently, then output the images once all threads have finished processing (see the sketch after this list).
If you display the same image or set of images more than once, cache them. If you don't want to cache the actual image, you can cache the metadata to avoid performing an HTTP request for every image.
Decide whether you need to output all the images to the page immediately, or whether some or all of them could be deferred and loaded via an AJAX request.
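To make the threading suggestion concrete, here is the idea sketched in Python rather than CFML (urllib for the requests, Pillow for the dimensions; the URL list is a placeholder, purely illustrative):

import io
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from PIL import Image

def image_size(url):
    # fetch the bytes once; Pillow reads width/height from the header
    # without decoding the full pixel data
    with urlopen(url, timeout=10) as resp:
        data = resp.read()
    return url, Image.open(io.BytesIO(data)).size

urls = ["http://example.com/a.jpg"]  # placeholder list of absolute URLs
with ThreadPoolExecutor(max_workers=8) as pool:
    for url, (width, height) in pool.map(image_size, urls):
        if width > 100:
            print('<img src="%s" />' % url)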
I wrote this utility function quite a while ago (it runs on older ColdFusion versions, too). Maybe it helps.
Note that this requires the Java Advanced Imaging Image I/O Tools (Jai-imageio). Download the .jar and put it in your class path (restarting CF is necessary).
/**
* Reads basic properties of many types of images. Values are
* returned as a struct consisting of the following elements:
*
* Property names, their types and default values:
* ImgInfo.width = 0 (pixels)
* ImgInfo.height = 0 (pixels)
* ImgInfo.fileSize = 0 (bytes)
* ImgInfo.isGrayscale = false (boolean)
* ImgInfo.isFile = false (boolean)
* ImgInfo.success = false (boolean)
* ImgInfo.error = "" (string)
*
* @param FilePath Physical path to image file.
* @return A struct, as described.
*/
function GetImageProperties(FilePath) {
    var ImgInfo = StructNew();
    var jImageIO = CreateObject("java", "javax.imageio.ImageIO");
    var jFile = CreateObject("java", "java.io.File").init(FilePath);
    var jBufferedImage = 0;
    var jColorSpace = 0;
    ImgInfo.width = 0;
    ImgInfo.height = 0;
    ImgInfo.fileSize = 0;
    ImgInfo.isGrayscale = false;
    ImgInfo.isFile = jFile.isFile();
    ImgInfo.success = false;
    ImgInfo.error = "";
    try {
        jBufferedImage = jImageIO.read(jFile);
        ImgInfo.fileSize = jFile.length();
        ImgInfo.width = jBufferedImage.getWidth();
        ImgInfo.height = jBufferedImage.getHeight();
        jColorSpace = jBufferedImage.getColorModel().getColorSpace();
        ImgInfo.isGrayscale = (jColorSpace.getType() eq jColorSpace.TYPE_GRAY);
        ImgInfo.success = true;
    }
    catch (any ex) {
        ImgInfo.error = ToString(ex);
    }
    jImageIO = JavaCast("null", "");
    jFile = JavaCast("null", "");
    jBufferedImage = JavaCast("null", "");
    jColorSpace = JavaCast("null", "");
    return ImgInfo;
}
Use like:
imageInfo = GetImageProperties(theImageSrc);
if (imageInfo.success and imageInfo.width > 100) {
    writeOutput('<img src="#HTMLEditFormat(theImageSrc)#" />');
}