Fast Update For a Table

I need to update the CustomerValue table for 4,000 customers and 20 different options.
That comes out to exactly 80,000 records.
I wrote this:
Update CustomerValue Set Value = 100 where Option in
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20);
But it is taking too long. I was wondering if I can use a PL/SQL block or any other way to make it run faster. A few minutes would be okay; it ran for 11 minutes, so I cancelled it.
Note: There is no ROWID in that table.
Thanks

If your condition is a contiguous range like this
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)
you can replace the IN list with a range predicate. On 1,100,000 rows this took about 6 seconds:
UPDATE CustomerValue
SET Value = 100
WHERE Option >= 1 AND Option <= 20;

Related

Checking the time in ORACLE APEX 5.1

I am new to APEX and I'm working on a food-ordering application where customers are permitted to change their order details only up to 15 minutes after the order has been placed. How can I implement that?
Create a validation on the date item. Calculate the difference between SYSDATE (i.e. "now") and the order date. Subtracting two DATE datatype values results in a number of days, so multiply it by 24 (to get hours) and then by 60 (to get minutes). If that result is more than 15, raise an error.
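For illustration, a minimal sketch of that arithmetic (the table name orders and the columns order_id/order_date are placeholders, not taken from the question) could look like:
-- Sketch only: orders, order_id and order_date are placeholder names.
-- Subtracting two DATE values yields days; multiplying by 24*60 converts that to minutes.
SELECT order_id,
       CASE
         WHEN (SYSDATE - order_date) * 24 * 60 <= 15 THEN 'editable'
         ELSE 'locked'
       END AS order_status
FROM   orders;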
To provide an alternative to Littlefoot's answer: timestamp arithmetic returns intervals, so if you use SYSTIMESTAMP instead your condition could be:
systimestamp - order_date < interval '15' minute
or, even using SYSDATE something like:
order_date > sysdate - interval '15' minute
One note: the 15 minutes seems somewhat arbitrary (a magic number); it relies on the order not starting to be processed within that time limit. It feels more natural to say something like "you can change your order until the kitchen has started cooking it". There's no need for any magic numbers then, and considerably less waste (either of the customer's time, always waiting 15 minutes, or of the kitchen's resources, cooking something they may then have to discard).

SAS group rows based on time intervals

I am doing some analysis on a dataset with a variable named "time". The time is in the format of HH:MM:SS.
However, now I want to group the rows based on 5-second and 1-minute time intervals respectively, in two different analyses.
I checked some Stack Overflow posts online, but it seems that they used a variable called "Interval" which starts at 1 and increases when the interval ends.
data _quotes_interval;
set _quotes_processed;
interval = intck('MINUTE1',"09:30:00"t,time)+1;
run;
What I want to do is keep the time format. For example, if the original time is 9:00:30 and I am doing the 1-minute interval, I want to change the time to 9:00:00 instead.
Is there a way to do this?
SAS stores a time value as a number of seconds. To round to the nearest minute or to the nearest 5 minutes, the corresponding units are 60 seconds and 5*60 = 300 seconds. The SAS ROUND function supports this.
time_nearest_minute = round(time, 60);
time_nearest_minute5 = round(time, 300);
Edit based on comment:
time_nearest_second5 = round(time, 5);
For the processing by minute, you can use
data _quotes_interval / view = _quotes_interval;
set _quotes_processed;
interval = intnx('MINUTE', time, 0, 'begin');
run;
intnx is normally used to add or subtract time intervals. Here I add 0 minutes, but that is just because I need the function for its fourth parameter, which specifies going back to the 'begin' of the 1-minute interval.
PS: For performance reasons, I would use the view= option to create a view on the existing data instead of copying all of it.
For processing by 5 second intervals
try interval = intnx('SECOND5', time, 0, 'begin');
Disclaimer:
I do not have SAS on this computer. If it does not work, leave a comment and I will test it at work.

How to record total values with rrdtool

I'm pretty sure this question has been asked several times, but either I did not find the correct answer or I didn't understand the solution.
To my current problem:
I have a sensor which measures the time a motor is running.
The sensor is reset after reading.
I'm not interested in the time the motor was running the last five minutes.
I'm more interested in how long the motor was running from the very beginning (or from the last reset).
When storing the values in an rrd, the values that end up recorded differ depending on the data source type.
When working with GAUGE, the value read is 3000 (tenths of a second) every five minutes.
When working with ABSOLUTE, the value is 10 every five minutes.
But what I would like to get is something like:
3000 after the first 5 minutes
6000 after the next 5 minutes (last value + 3000)
9000 after another 5 minutes (last value + 3000)
The accuracy of the older values (and slopes) is not so important, but the last value should reflect the time in seconds since the beginning as accurate as possible.
Is there a way to accomplish this?
I don't know whether it is useful for your need or not, but maybe the TREND/TRENDNAN CDEF function is what you want; look here:
TREND CDEF function
I have now created a small SQLite database with one table and one column in that table.
The table has one row. I update that row every time my cron job runs, adding the latest reading to the stored value. So the current value of that one row and column is the cumulative value of my sensor. This is then fed into the rrd.
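As an illustration of that workaround (the table and column names below are made up, and 3000 stands in for the latest sensor reading):
-- Placeholder schema: a single-row table holding the cumulative runtime.
CREATE TABLE motor_runtime (total_runtime INTEGER NOT NULL);
INSERT INTO motor_runtime (total_runtime) VALUES (0);
-- Run by the cron job: add the latest reading to the stored total,
-- then read the new total back and feed it into the rrd.
UPDATE motor_runtime SET total_runtime = total_runtime + 3000;
SELECT total_runtime FROM motor_runtime;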
Any other (better) ideas?
The way that I'd tackle this (in Linux) is to write the value to a plain-text file and then use the value from that file for the RRDTool graph. Using SQLite (or any other SQL database) would be unnecessarily heavy on the system just to keep track of something like this.

Querying geonames for rows 1000-1200

I have been querying Geonames for parks per state. Mostly there are under 1,000 parks per state, but I just queried Connecticut, and there are just under 1,200 parks there.
I already got the 1-1000 with this query:
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=1000
But increasing the maxRows to 1200 gives an error that I am querying for too many at once. Is there a way to query for rows 1000-1200?
I don't really see how to do it with their API.
Thanks!
You should be using the startRow parameter in the query to page results. The documentation notes that it takes an integer value (0-based indexing) and should be
Used for paging results. If you want to get results 30 to 40, use startRow=30 and maxRows=10. Default is 0.
So to get the next 1000 data points (1000-1999), you should change your query to
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=1000&startRow=1000
I'd suggest reducing the maxRows to something manageable as well - something that will put less of a load on their servers and make for quicker responses to your queries.

Multiple rows with a single INSERT in SQLServer 2008

I am testing the speed of inserting multiple rows with a single INSERT statement.
For example:
INSERT INTO [MyTable] VALUES (5, 'dog'), (6, 'cat'), (3, 'fish')
This is very fast until I pass 50 rows on a single statement, then the speed drops significantly.
Inserting 10,000 rows in batches of 50 takes 0.9 seconds.
Inserting 10,000 rows in batches of 51 takes 5.7 seconds.
My question has two parts:
Why is there such a hard performance drop at 50?
Can I rely on this behavior and code my application to never send batches larger than 50?
My tests were done in C++ and ADO.
Edit:
It appears the drop-off point is not 50 rows but 1,000 values in total (rows × columns): I get similar results with 50 rows of 20 columns or 100 rows of 10 columns.
It could also be related to the size of the row. The table you use as an example seems to have only 2 columns. What if it has 25 columns? Is the performance drop off also at 50 rows?
Did you also compare with the "union all" approach shown here? http://blog.sqlauthority.com/2007/06/08/sql-server-insert-multiple-records-using-one-insert-statement-use-of-union-all/
I suspect there's an internal cache/index that is used up to 50 rows (it's a nice round decimal number). After 50 rows it falls back to a less efficient general-case insertion algorithm that can handle an arbitrary number of inputs without using excessive memory.
The slowdown is probably in parsing the string of values: VALUES (5, 'dog'), (6, 'cat'), (3, 'fish'), and not an INSERT issue.
Try something like this, which will insert one row for each row returned by the query:
INSERT INTO YourTable1 (col1, col2)
SELECT Value1, Value2
FROM YourTable2
WHERE ... -- rows will be more than 50
and see what happens.
If you are using SQL Server 2008, you can use table-valued parameters and do a single insert statement.
Personally, I've never seen the slowdown at 50 inserted records, even with regular batches. Regardless, we moved to table-valued parameters, which gave us a significant speed increase.
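As a rough sketch of the table-valued-parameter approach on SQL Server 2008 (all object names below are invented for illustration):
-- Hypothetical table type matching the rows to insert.
CREATE TYPE dbo.MyRowType AS TABLE (Id INT, Name VARCHAR(50));
GO
-- Hypothetical procedure: the client binds all rows to @Rows and makes one call,
-- so the statement text no longer grows with the batch size.
CREATE PROCEDURE dbo.InsertMyRows
    @Rows dbo.MyRowType READONLY
AS
BEGIN
    INSERT INTO dbo.MyTable (Id, Name)
    SELECT Id, Name FROM @Rows;
END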
Random thoughts:
is it completely consistent when run repeatedly?
are you checking for duplicates in the 1st 10k rows for the 2nd 10k insert?
did you try batch size of 51 first?
did you empty the table between tests?
For high-volume and high-frequency inserts, consider using Bulk Inserts to load your data. It's not the simplest thing in the world to implement and it brings with it a new set of challenges, but it can be much faster than doing individual INSERTs.
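For example, a minimal BULK INSERT from a client-generated flat file might look like this (the file path and table name are placeholders):
-- Hypothetical example: load all rows from a flat file in one operation.
BULK INSERT dbo.MyTable
FROM 'C:\data\mytable_rows.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);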