I am doing some research on image compression via discrete cosine transforms, and I want to change the quantization table sizes so that I can study what happens when I change the size of the matrices I divide my pictures into. The standard sub-matrix size is 8x8, and there are a lot of tables for those dimensions. For example, the standard JPEG quantization table (which I use) is:
standardmatrix8 = np.matrix('16 11 10 16 24 40 51 61;\
12 12 14 19 26 58 60 55;\
14 13 16 24 40 57 69 56;\
14 17 22 29 51 87 80 62;\
18 22 37 56 68 109 103 77;\
24 35 55 64 81 104 103 92;\
49 64 78 77 103 121 120 101;\
72 92 95 98 112 100 103 99').astype('float')
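For context, the table is applied entrywise to each block of DCT coefficients. A minimal sketch of that step, where dct_block is just a random stand-in for real DCT output:

import numpy as np

q8 = np.asarray(standardmatrix8)            # plain array, so * stays entrywise
dct_block = np.random.randn(8, 8) * 100     # stand-in for real DCT coefficients
quantized = np.round(dct_block / q8)        # quantize: divide entrywise and round
restored = quantized * q8                   # dequantize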
I have assumed that the quantization tables for 2x2 and 4x4 would be:
standardmatrix2 = np.matrix('16 11;\
12 12').astype('float')
standardmatrix4 = np.matrix('16 11 10 16;\
12 12 14 19;\
14 13 16 24;\
18 22 37 56').astype('float')
My assumption is that the entries in the standard table correspond to the same frequencies in the smaller matrices.
But what about quantization tables with dimensions 16x16, 24x24, and so on? I know that the standard quantization tables were worked out by experiment and can't be calculated from some formula, but I assume that someone has tried changing the matrix sizes before me! Where can I find these tables? Or can I just make something up and scale the last entries to higher frequencies?
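I can't point to published tables for other sizes, but since the standard table is itself an experimental heuristic, one option is to derive candidate tables from it. A minimal sketch, reusing standardmatrix8 from above (np.kron just repeats entries; smooth interpolation, e.g. scipy.ndimage.zoom, would be a refinement of the same idea):

import numpy as np

q8 = np.asarray(standardmatrix8)   # the 8x8 JPEG table defined above

# Smaller tables: subsample the 8x8 grid so each entry still sits at
# roughly the same relative frequency (an alternative to taking the
# top-left corner as done above).
q4 = q8[::2, ::2]
q2 = q8[::4, ::4]

# Larger tables: crude nearest-neighbour upsampling to 16x16; every
# 8x8 entry covers a 2x2 patch of the finer frequency grid.
q16 = np.kron(q8, np.ones((2, 2)))

Whether repetition or smooth interpolation better matches perceptual weighting at other block sizes is exactly the kind of thing the experiment can then compare.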
I am looking to create a dummy variable for the Latin American countries in my data set, which I need for a log-log model. I know how to log the variables for my later regression. Any suggestions on how to make a dummy variable for the Latin American countries with my data would be appreciated.
data HW6;
input country : $25. midyear sancts lprots lfrac ineql pop;
cards;
CHILE 1955 58 44 65 57 6.743
CHILE 1960 19 34 65 57 7.785
CHILE 1965 27 24 65 57 8.510
CHILE 1970 36 29 65 57 9.369
CHILE 1975 38 58 65 57 10.214
COSTA_RICA 1955 16 7 54 60 1.024
COSTA_RICA 1960 6 1 54 60 1.236
COSTA_RICA 1965 2 1 54 60 1.482
COSTA_RICA 1970 3 1 54 60 1.732
COSTA_RICA 1975 2 3 54 60 1.965
INDIA 1955 81 134 47 52 404.478
INDIA 1960 101 190 47 52 445.857
INDIA 1965 189 845 47 52 494.882
INDIA 1970 133 915 47 52 553.619
INDIA 1975 132 127 47 52 616.551
JAMICA 1955 11 12 47 62 1.542
JAMICA 1960 9 2 47 62 1.629
JAMICA 1965 8 6 47 62 1.749
JAMICA 1970 1 1 47 62 1.877
JAMICA 1975 7 1 47 62 2.043
PHILIPPINES 1955 26 123 48 56 24.0
PHILIPPINES 1960 20 38 48 56 27.898
PHILIPPINES 1965 9 5 48 56 32.415
PHILIPPINES 1970 79 25 48 56 37.540
SRI_LANKA 1955 29 2 73 52 8.679
SRI_LANKA 1960 75 35 73 52 9.879
SRI_LANKA 1965 25 63 73 52 11.202
SRI_LANKA 1970 34 14 73 52 12.532
TURKEY 1955 79 1 67 61 24.145
TURKEY 1960 138 19 67 61 28.217
TURKEY 1965 36 51 67 61 31.951
TURKEY 1970 51 22 67 61 35.743
URUGUAY 1955 8 4 57 48 2.372
URUGUAY 1960 12 1 57 48 2.538
URUGUAY 1965 16 14 57 48 2.693
URUGUAY 1970 21 19 57 48 2.808
URUGUAY 1975 24 45 57 48 2.829
VENEZUELA 1955 38 14 76 65 6.110
VENEZUELA 1960 209 23 76 65 7.632
VENEZUELA 1965 100 162 76 65 9.119
VENEZUELA 1970 9 27 76 65 10.709
VENEZUELA 1975 4 12 76 65 12.722
;
data newData;
set HW6;
sancts = log (sancts);
lprots = log (lprots);
lfrac = log (lfrac);
ineql = log (ineql);
pop = log (pop);
run;
The GLMSELECT procedure is one simple way of creating dummy variables; there is a nice article about how to use it to generate them.
data newData;
set HW6;
sancts = log (sancts);
lprots = log (lprots);
lfrac = log (lfrac);
ineql = log (ineql);
pop = log (pop);
Y = 0; *-- Create a fake response variable --*
run;
proc glmselect data=newData noprint outdesign(addinputvars)=want(drop=Y);
class country;
model Y = country / noint selection=none;
run;
If needed in a further step, use the macro variable &_GLSMOD created by the procedure, which contains the names of the dummy variables.
The real question here is not related to SAS; it is how to get the region of a country from its name.
I would give ISO 3166 a try: it lists all countries and their geographical location.
Getting that list is straightforward. Then import it into SAS, merge by country, and finally flag the countries in Latin America, as sketched below.
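Since the download-and-merge step is the same in any language, here is the idea sketched in pandas (the regions table is a hypothetical stand-in for the imported ISO 3166 list; the SAS analogue is two PROC SORTs followed by a MERGE ... BY country in a data step):

import pandas as pd

# Hypothetical stand-in for the imported ISO 3166 country/region list.
regions = pd.DataFrame({
    "country": ["CHILE", "COSTA_RICA", "INDIA", "URUGUAY", "VENEZUELA"],
    "region": ["Latin America", "Latin America", "Asia",
               "Latin America", "Latin America"],
})
hw6 = pd.DataFrame({"country": ["CHILE", "INDIA", "URUGUAY"]})

merged = hw6.merge(regions, on="country", how="left")                # merge by country
merged["latam"] = (merged["region"] == "Latin America").astype(int)  # the dummy
print(merged)

With only nine countries in HW6, a hand-maintained list of the Latin American ones would of course work just as well.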
I am trying to parse and read a .txt file from the NOAA Weather site. How can I search the file to find certain text and insert it into the database?
I'm trying to search for Greenville and pull the conditions and temp, then push that into my database along with other cities. Any code that you are willing to share would be appreciated.
Code:
<cffile action="read" file="#expandPath('./NWS Raleigh Durham.txt')#" variable="myFile">
<cfdump var="#myfile#">
Content:
Regional Weather Roundup issued by NWS Raleigh/Durham, NC:

000
ASUS42 KRAH 081410
RWRRAH

NORTH CAROLINA WEATHER ROUNDUP
NATIONAL WEATHER SERVICE RALEIGH NC
900 AM EST THU MAR 08 2018

NOTE: "FAIR" INDICATES FEW OR NO CLOUDS BELOW 12,000 FEET WITH NO SIGNIFICANT WEATHER AND/OR OBSTRUCTIONS TO VISIBILITY.

NCZ001-053-055-056-065-067-081500-
WESTERN NORTH CAROLINA

CITY            SKY/WX    TMP DP RH WIND     PRES    REMARKS
ASHEVILLE       FAIR      35  23 61 VRB3     29.92F
JEFFERSON       FLURRIES  26  16 65 W9       29.83F  WCI 16
MORGANTON       FAIR      37  25 64 NW3      29.97F
HICKORY         CLOUDY    35  24 64 SW5      29.94F  WCI 31
RUTHERFORDTON   CLOUDY    37  27 67 W6       29.97S  WCI 33
MOUNT AIRY      FAIR      37  21 53 NW8      29.94F  WCI 31
BOONE           PTSUNNY   27  16 63 NW13G18  29.85F  WCI 16
$$

NCZ021-022-025-041-071-084-088-081500-
CENTRAL NORTH CAROLINA

CITY            SKY/WX    TMP DP RH WIND     PRES    REMARKS
CHARLOTTE       CLOUDY    38  27 64 W5       29.97F  WCI 34
GREENSBORO      PTSUNNY   38  24 57 W8       29.93S  WCI 32
WINSTON-SALEM   FAIR      38  20 48 W8       29.94F  WCI 32
RALEIGH-DURHAM  PTSUNNY   36  26 67 CALM     29.96R
FORT BRAGG      CLOUDY    39  23 52 NW5      29.97R
FAYETTEVILLE    CLOUDY    38  28 67 W6       29.98R  WCI 33
BURLINGTON      CLOUDY    39  25 57 SW5      29.94S
LAURINBURG      CLOUDY    38  28 67 NW8      29.99R  WCI 32
$$

NCZ011-015-027-028-043-044-047-080-103-081500-
NORTHEASTERN NORTH CAROLINA

CITY            SKY/WX    TMP DP RH WIND     PRES    REMARKS
ROCKY MT-WILSO  PTSUNNY   40  24 53 NW6      29.96R
GREENVILLE      FAIR      41  23 48 N6       29.97S
WASHINGTON      FAIR      41  25 51 NW9      29.94F
ELIZABETH CITY  PTSUNNY   40  27 59 NW7      29.92S
MANTEO          CLOUDY    42  28 58 N9       29.91S
CAPE HATTERAS   FAIR      45  33 63 N5       29.90S
$$

NCZ078-087-090-091-093-098-101-081500-
SOUTHEASTERN NORTH CAROLINA

CITY            SKY/WX    TMP DP RH WIND     PRES    REMARKS
LUMBERTON       CLOUDY    40  29 64 NW8      29.99R  WCI 34
GOLDSBORO       CLOUDY    39  25 57 NW5      29.94S
KINSTON         PTSUNNY   43  25 49 NW8      29.96S
KENANSVILLE     CLOUDY    39  27 60 NW7      29.96S  WCI 34
NEW BERN        FAIR      41  27 57 NW8      29.95S
CHERRY POINT    NOT AVBL
BEAUFORT        FAIR      45  28 51 NW13     29.93S
JACKSONVILLE    CLOUDY    43  27 53 NW9      29.95S
WILMINGTON      FAIR      44  27 51 NW13     29.96S
$$
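Since the product is fixed-format text, one way to pull a single station out is a regular expression keyed on the city name. A sketch of the parsing logic in Python (in CFML the equivalents would be reFind/reMatch on myFile, followed by a cfquery insert; stations reporting NOT AVBL would need an extra guard):

import re

# Read the saved product (the same file the <cffile> tag reads above).
with open("NWS Raleigh Durham.txt") as f:
    product = f.read()

# City name, then the sky/conditions word, then the temperature.
m = re.search(r"GREENVILLE\s+([A-Z]+)\s+(\d+)", product)
if m:
    conditions, temp = m.group(1), int(m.group(2))
    print(conditions, temp)  # FAIR 41 for the product above
    # ...INSERT the values into the database here, and repeat per city.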
DATA OZONE;
INPUT MONTH $ STMF YKRS @@;
CARDS;
A 80 66 A 68 82 A 24 47 A 24 28 A 82 44 A 100 55
A 55 34 A 91 60 A 87 70 A 64 41 A . 67 A . 127 A 170 96 A . 56
JN 215 93 JN 230 106 JN . 49 JN 69 64 JN 98 83 JN 125 97
JN 72 51 JN 125 75 JN 143 104 JN 192 107 JN . 56 JN 122 68
JN 32 20 JN 23 35 JN 71 30 JN 38 31 JN 136 81 JN 169 119
JL 152 76 JL 201 108 JL 134 85 JL 206 96 JL 92 48 JL 101 60
JL 133 . JL 83 50 JL . 27 JL 60 37 JL 124 47 JL 142 71
JL 75 49 JL 103 59 JL . 53 JL 46 25 JL 68 45 JL . 78
S 38 23 S 80 50 S 80 34 S 99 58 S 71 35 S 42 24 S 52 27 S 33 17
;
run;
Proc Ttest data=Ozone PLOT=NONE ALPHA=0.01;
Where MONTH='JN';
Paired STMF*YKRS;
Run;
Question 2
Data Baseball;
Input ba 1-3 league $ 5-19; * column input so the embedded blank in league is read;
Datalines;
276 National League
288 National League
281 National League
290 National League
303 National League
257 American League
254 American League
263 American League
261 American League
;
Run;
Proc Ttest data=Baseball ALPHA=0.02;
Class league;
Var ba;
Run;
Question 3
Proc Ttest data=ozone ALPHA=0.01 Plot=NONE;
Where Month='A'-'S';
Paired STMF*YKRS;
Run;
Question 2: Test whether the two leagues have different batting averages. Use alpha = 0.02 in your conclusions and compute a 98% confidence interval for the means.
Question 3: From the first question, test for differences between the A and S average ozone values. Use alpha = 0.01 in your conclusions. Include a 99% confidence interval for the difference.
So my questions: for Question 2 (kind of a stupid question), for whatever reason I am confused about what you are supposed to do.
For Question 3 (my main question), how do I use one PROC TTEST to check the differences between months A and S? I tried using a WHERE statement, as you can see above, but of course that does not work, and I'm a bit stumped on where to go from here. Also, I omitted a good bit of the month data in the ozone portion, as I couldn't properly format all of the data without it looking extremely confusing.
Thanks for your help in advance!
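For Question 3, months A and S have different numbers of observations, so a paired test cannot apply; the usual reading is a two-sample comparison, which in SAS means subsetting with WHERE MONTH IN ('A','S') and using a CLASS MONTH statement instead of PAIRED. As a sanity check of that logic, a minimal sketch in Python, shown for the STMF series (YKRS would be analogous); the lists are the non-missing month-A and month-S STMF values transcribed from the CARDS data above:

from scipy import stats

stmf_a = [80, 68, 24, 24, 82, 100, 55, 91, 87, 64, 170]  # month A, missing dropped
stmf_s = [38, 80, 80, 99, 71, 42, 52, 33]                # month S

# Two-sample t-test (Welch's, not assuming equal variances), alpha = 0.01.
t, p = stats.ttest_ind(stmf_a, stmf_s, equal_var=False)
print(f"t = {t:.3f}, p = {p:.4f}")  # reject equal means only if p < 0.01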
I've encountered an interesting situation while calculating the inter-quartile range. Assuming we have a dataframe such as:
import pandas as pd
import numpy as np  # pd.np is deprecated and removed in recent pandas

index = pd.date_range('2014 01 01', periods=10, freq='D')
data = np.random.randint(0, 100, (10, 5))
data = pd.DataFrame(index=index, data=data)
data
Out[90]:
0 1 2 3 4
2014-01-01 33 31 82 3 26
2014-01-02 46 59 0 34 48
2014-01-03 71 2 56 67 54
2014-01-04 90 18 71 12 2
2014-01-05 71 53 5 56 65
2014-01-06 42 78 34 54 40
2014-01-07 80 5 76 12 90
2014-01-08 60 90 84 55 78
2014-01-09 33 11 66 90 8
2014-01-10 40 8 35 36 98
# test for q1 values (this works)
data.quantile(0.25)
Out[111]:
0 40.50
1 8.75
2 34.25
3 17.50
4 29.50
# break it by inserting row of nans
data.iloc[-1] = np.nan
data.quantile(0.25)
Out[115]:
0 42
1 11
2 34
3 12
4 26
The first quartile can be calculated by taking the median of the values in the dataframe that fall below the overall median, so we can see what data.quantile(0.25) should have yielded, e.g.:
med = data.median()
q1 = data[data<med].median()
q1
Out[119]:
0 37.5
1 8.0
2 19.5
3 12.0
4 17.0
It seems that quantile is failing to provide an appropriate representation of q1 etc., since it is not doing a good job of handling the NaN values (i.e. it works without NaNs, but not with NaNs).
I thought this might not be a NaN issue; rather, quantile might be failing to handle even-numbered data sets (i.e. where the median must be calculated as the mean of the two central numbers). However, after testing dataframes with both even and odd numbers of rows, I saw that quantile handled these situations properly. The problem seems to arise only when NaN values are present in the dataframe.
I would like to use quantile to calculate the rolling q1/q3 values in my dataframe; however, this will not work with NaNs present. Can anyone provide a solution to this issue?
Internally, quantile uses numpy.percentile over the non-null values. When you change the last row of data to NaNs, you're essentially left with the array array([ 33., 46., 71., 90., 71., 42., 80., 60., 33.]) in the first column.
Calculating np.percentile(np.array([ 33., 46., 71., 90., 71., 42., 80., 60., 33.]), 25) gives 42.
From the docstring:
Given a vector V of length N, the qth percentile of V is the qth ranked
value in a sorted copy of V. A weighted average of the two nearest
neighbors is used if the normalized ranking does not match q exactly.
The same as the median if q=50, the same as the minimum if q=0
and the same as the maximum if q=100.
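To see this concretely, a quick check (using the first-column values quoted above) that pandas' quantile agrees with numpy's percentile over the non-null values:

import numpy as np
import pandas as pd

col = pd.Series([33., 46., 71., 90., 71., 42., 80., 60., 33., np.nan])
print(col.quantile(0.25))               # 42.0 -- the NaN is skipped
print(np.percentile(col.dropna(), 25))  # 42.0 as well

So quantile is consistent with percentile; the different numbers from the median-of-the-lower-half approach reflect a different quartile convention, not a NaN bug.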
Can anyone help me with this practice question: 33 31 11 47 2 20 24 12 2 43. I am trying to figure out what the contents of the two output lists would be after the first pass of the merge sort.
The answer is supposedly:
List 1: 33 11 47 12
List 2: 31 2 20 24 2 43
This is not making sense to me, since I was under the impression that the first pass was where the list is divided into two lists at the middle...
33 31 11 47 2 20 24 12
At first the list is divided into individual elements; once singleton lists are formed, each element is merged with the one next to it. So after the first pass we have
31 33 11 47 2 20 12 24
After that
11 31 33 47 2 12 20 24
and then
2 11 12 20 24 31 33 47
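For what it's worth, a minimal bottom-up merge sort sketch (my own illustration) that prints the list after each pass of pairwise run-merging; on the eight-element list used in the walkthrough it reproduces exactly those passes:

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

def merge_sort_passes(data):
    runs = [[x] for x in data]  # start from singleton lists
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
        print([x for run in runs for x in run])  # state after this pass
    return runs[0]

merge_sort_passes([33, 31, 11, 47, 2, 20, 24, 12])
# [31, 33, 11, 47, 2, 20, 12, 24]
# [11, 31, 33, 47, 2, 12, 20, 24]
# [2, 11, 12, 20, 24, 31, 33, 47]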