I'd like to build Oreo for my Z5 Compact (suzuran, e5823). I started from Sony's instructions on their developer site.
Unlike with 7.0 and 6.0 at the time, when I'm prompted for the build configuration I cannot see my device:
Lunch menu... pick a combo:
1. aosp_arm-eng
2. aosp_arm64-eng
3. aosp_mips-eng
4. aosp_mips64-eng
5. aosp_x86-eng
6. aosp_x86_64-eng
7. full_fugu-userdebug
8. aosp_fugu-userdebug
9. car_emu_arm64-userdebug
10. car_emu_arm-userdebug
11. car_emu_x86_64-userdebug
12. car_emu_x86-userdebug
13. mini_emulator_arm64-userdebug
14. m_e_arm-userdebug
15. m_e_mips64-eng
16. m_e_mips-userdebug
17. mini_emulator_x86_64-userdebug
18. mini_emulator_x86-userdebug
19. aosp_dragon-userdebug
20. aosp_dragon-eng
21. aosp_marlin-userdebug
22. aosp_marlin_svelte-userdebug
23. aosp_sailfish-userdebug
24. aosp_angler-userdebug
25. aosp_bullhead-userdebug
26. aosp_bullhead_svelte-userdebug
27. hikey-userdebug
28. aosp_f8131-userdebug
29. aosp_f8132-userdebug
30. aosp_f8331-userdebug
31. aosp_f8332-userdebug
32. aosp_g8231-userdebug
33. aosp_g8232-userdebug
34. aosp_f5321-userdebug
35. aosp_g8441-userdebug
36. aosp_g8141-userdebug
37. aosp_g8142-userdebug
38. aosp_g8341-userdebug
39. aosp_g8342-userdebug
40. aosp_f5121-userdebug
41. aosp_f5122-userdebug
42. aosp_e2303-userdebug
43. aosp_e2333-userdebug
Which would you like? [aosp_arm-eng]
I'm not sure if I should just pick aosp_arm-eng, download the binaries from Sony for 7.0.1, and pray for the best.
Should I wait for them to release a specific guide, or to update their repos?
I wrote an R script to randomly assign participants for an RCT. I used set.seed() to ensure I would have reproducible results.
I now want to document what I have done in an R Markdown document, but confusingly I don't get the same results, despite using the same seed.
Here is the code chunk:
knitr::opts_chunk$set(cache = T)
set.seed(4321)
Group <- sample(1:3, 5, replace=TRUE)
couple.df <- data.frame(couple.id=1:5,
                        partner1=paste0("FRS0", c(35, 36, 41, 50, 61)),
                        partner2=paste0("FRS0", c(38, 37, 42, 51, 62)),
                        Group)
print(couple.df)
And here is the output I get when running it as a chunk:
  couple.id partner1 partner2 Group
1         1   FRS035   FRS038     2
2         2   FRS036   FRS037     3
3         3   FRS041   FRS042     2
4         4   FRS050   FRS051     1
5         5   FRS061   FRS062     3
(not sure how to get this to format)
This is the same as I had when I wrote the original code as an R script.
However, when I knit the markdown file I get the following output in my HTML document (sorry again about the formatting; I have just copied and pasted from the HTML document, adding in the ticks to format it as code, and pointers on how to do this properly would also be welcome):
knitr::opts_chunk$set(cache = T)
set.seed(4321)
Group <- sample(1:3, 5, replace=TRUE)
couple.df <- data.frame(couple.id=1:5,
partner1=paste0("FRS0", c(35, 36, 41, 50, 61)),
partner2=paste0("FRS0", c(38, 37, 42, 51, 62)),
Group)
print(couple.df)
## couple.id partner1 partner2 Group
## 1 1 FRS035 FRS038 1
## 2 2 FRS036 FRS037 2
## 3 3 FRS041 FRS042 3
## 4 4 FRS050 FRS051 2
## 5 5 FRS061 FRS062 1
That is, they are different. What is going on here and how can I get the markdown document to give the same results? I am committed to using the allocation I arrived at using the original script.
I am new to profiling. I am trying to profile my PHP code with xdebug, but the cachegrind file that gets created has no significant content.
I have set:
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_name = cachegrind+%p+%H+%R.cg
I call my page with the additional GET parameter ?XDEBUG_PROFILE=1.
Here is my output:
version: 1
creator: xdebug 2.7.0alpha1 (PHP 7.0.30-dev)
cmd: C:\WPNserver\www\DMResources\Classes\VendorClasses\PHPMySQLiDatabase\MysqliDb.php
part: 1
positions: line
events: Time Memory
fl=(1)
fn=(221) php::mysqli->close
1244 103 -14832
fl=(42)
fn=(222) MysqliDbExt->__destruct
1239 56 0
cfl=(1)
cfn=(221)
calls=1 0 0
1244 103 -14832
That's it - I must be missing something fundamental.
I think you hit this bug in xdebug.
As suggested by Derick in the issue tracker, you can work around this by adding %r to the profiler output name, e.g.: xdebug.profiler_output_name = cachegrind+%p+%H+%R+%r.cg
(with %r adding a random number to the name)
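Putting the trigger setting from the question together with the suggested name pattern, a php.ini excerpt could look like this (the exact filename pattern is just an example):

; only profile requests that carry the trigger (e.g. ?XDEBUG_PROFILE=1)
xdebug.profiler_enable_trigger = 1
; %p = pid, %H = host, %R = request URI, %r = random number
xdebug.profiler_output_name = cachegrind+%p+%H+%R+%r.cg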
For a programming class, I have to convert a range of values to a switch statement without using if/else ifs. Here are the values that I need to convert to cases:
0 to 149 .......... $10.00
150 to 299 ........ $15.00
300 to 449 ........ $25.00
550 to 749 ........ $40.00
750 to 1199 ....... $65.00
2000 and above .... $85.00
I am having difficulty finding a way to separate the values since they are so close in number (like 149 and 150).
I have tried plenty of approaches, such as dividing the input by 2000 and then multiplying by 10 to get a whole number, but the results are too close to each other to give each range its own case.
The first thing to do is to figure out your granularity. It looks like in your case you do not deal with increments less than 50.
Next, convert each range to a range of integers resulting from dividing the number by the increment (i.e. 50). In your case, this would mean
0, 1, 2 --> 10
3, 4, 5 --> 15
6, 7, 8 --> 25
... // And so on
This maps to a very straightforward switch statement.
Note: This also maps to an array of values, like this:
10, 10, 10, 15, 15, 15, 25, 25, 25, ...
Now you can get the result by doing array[n/50].
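As a concrete sketch in C++ (the function name and the treatment of the ranges the question leaves unspecified are my own assumptions):

// Hypothetical helper: map an amount to its price by switching on the 50-unit bucket.
double priceFor(int n) {
    switch (n / 50) {
        case 0: case 1: case 2:                        // 0 to 149
            return 10.00;
        case 3: case 4: case 5:                        // 150 to 299
            return 15.00;
        case 6: case 7: case 8:                        // 300 to 449
            return 25.00;
        case 11: case 12: case 13: case 14:            // 550 to 749
            return 40.00;
        case 15: case 16: case 17: case 18: case 19:
        case 20: case 21: case 22: case 23:            // 750 to 1199
            return 65.00;
        default:                                       // 2000 and above
            return 85.00;                              // 450-549 and 1200-1999 are not specified in the question
    }
}

The array variant works the same way: fill prices[k] with the value for bucket k and look it up with prices[n / 50].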
I'm not worried about efficiency right now. I'm just looking for a decent way to generate random numbers from 0 through 1 billion. I've tried rand() * rand(), but it only gives me numbers greater than about 10 million. I would like the range to be spread out much more. Anyone have any suggestions?
Sure, just use the modern <random> facilities of C++:
#include <iostream>
#include <random>

int main() {
    std::random_device rd;                              // non-deterministic seed source
    std::mt19937 gen(rd());                             // Mersenne Twister engine
    std::uniform_int_distribution<> dis(1, 1000000000); // uniform integers in [1, 1000000000]
    for (int n = 0; n < 10; ++n)
        std::cout << dis(gen) << ' ';
    std::cout << '\n';
}
(from here, slightly modified to do what OP needs) will do what you need.
An analogous distribution for floating-point values also exists if needed.
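For example (a minimal sketch, assuming you want doubles in [0, 1)):

std::uniform_real_distribution<> disReal(0.0, 1.0);  // uniform doubles in [0, 1)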
Remark: In the unlikely case that your platform's int cannot hold one billion, or if you need even bigger numbers, you can also use bigger integer types like this:
std::uniform_int_distribution<std::int64_t> dis(1, 1000000000);
Also note that seeding the mt as presented here is not optimal; see my question here for more information.
One billion is just below 2^30. If you can't generate a 30 bit number directly, then generate two 15-bit numbers, shift one left by 15 bits and XOR with the unshifted number to get a 30-bit number.
If the 30-bit result exceeds 1 billion, then throw it away and generate another 30-bit number. 2^30 = 1073741824, so the result will only be too large in about 7% of cases.
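A sketch of that approach in C++ (using std::mt19937 as the source of the 15-bit values; the helper name is just for illustration):

#include <cstdint>
#include <random>

std::mt19937 gen(std::random_device{}());
std::uniform_int_distribution<std::uint32_t> bits15(0, 0x7FFF);  // 15 random bits per draw

// Combine two 15-bit draws into one 30-bit value; reject anything above 1 billion.
std::uint32_t randomUpToBillion() {
    for (;;) {
        std::uint32_t n = (bits15(gen) << 15) ^ bits15(gen);  // 30-bit number
        if (n <= 1000000000u)                                  // kept in roughly 93% of cases
            return n;
    }
}

XOR and OR are interchangeable here because the two 15-bit halves occupy disjoint bit positions.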
RANDOMIZED SERIAL/SEQUENTIAL NUMBERS (UNIQUE & UNPREDICTABLE)
This applies if the random numbers are required to be unique.
12345678900 72 12345678901 34. 12345678926 34. 12345678951 24.
12345678976 84. 12345678902 65. 12345678927 63. 12345678952 51.
12345678977 67. 12345678903 09. 12345678928 11. 12345678953 19.
12345678978 53. 12345678904 22. 12345678929 44. 12345678954 78.
12345678979 04. 12345678905 21. 12345678930 85. 12345678955 76.
12345678980 35. 12345678906 37. 12345678931 01. 12345678956 31.
12345678981 73. 12345678907 42. 12345678932 55. 12345678957 12.
12345678982 16. 12345678908 20. 12345678933 95. 12345678958 87.
12345678983 77. 12345678909 71. 12345678934 49. 12345678959 83.
12345678984 13. 12345678910 32. 12345678935 60. 12345678960 50.
12345678985 45. 12345678911 58. 12345678936 86. 12345678961 02.
12345678986 61. 12345678912 66. 12345678937 30. 12345678962 64.
12345678987 23. 12345678913 10. 12345678938 48. 12345678963 94.
12345678988 40. 12345678914 79. 12345678939 89. 12345678964 27.
12345678989 70. 12345678915 93. 12345678940 43. 12345678965 92.
12345678990 08. 12345678916 46. 12345678941 72. 12345678966 03.
12345678991 88. 12345678917 57. 12345678942 14. 12345678967 47.
12345678992 65. 12345678918 52. 12345678943 38 12345678968 62.
12345678993 17. 12345678919 15. 12345678944 75. 12345678969 80.
12345678994 54. 12345678920 41. 12345678945 07. 12345678970 18.
12345678995 28. 12345678921 62. 12345678946 25. 12345678971 58.
12345678996 74. 12345678922 26. 12345678947 69. 12345678972 43.
12345678997 29. 12345678923 91. 12345678948 82. 12345678973 59.
12345678998 33. 12345678924 05. 12345678949 56. 12345678974 81.
12345678999 78. 12345678925 36. 12345678950 68. 12345678975 90.
12345679000 06.
These are 101 unique random numbers.
Each number consists of 13 digits, of which the first 11 digits are a sequential number and the 12th and 13th digits together form a random number.
These last two digits transform the 11-digit sequential number into a 13-digit random number. Thus, when a sequential number is transformed into a random number by appending one or two digits, the randomization does not need a math-based algorithm.
Even if the two digits are created by a math-based algorithm, there are innumerable such algorithms that can create two-digit random numbers.
Hence, my claim is that when 1, 2 or 3 randomly created digits are attached to a sequential number, you give it randomness, and such randomized sequential numbers are unpredictable.
Thus a SHORTEST POSSIBLE sequence of 11 digits can accommodate one billion unpredictable random numbers, and a sequence of only 14 digits can accommodate one trillion unpredictable random numbers.
For reference, this is all on a 64-bit Windows 7 machine, in PyCharm Educational Edition 1.0.1, with Python 3.4.2 and Pandas 0.16.1.
I have an ~791MB .csv file with ~3.04 million rows x 24 columns. The file contains liquor sales data for the state of Iowa from January 2014 to February 2015. If you are interested, the file can be found here: https://data.iowa.gov/Economy/Iowa-Liquor-Sales/m3tr-qhgy.
One of the columns, titled STORE LOCATION, holds the address, including latitude and longitude. The purpose of the program below is to take the latitude and longitude out of the store location cell and place each in its own column. When the file is cut down to ~1.04 million rows, my program works properly.
1 import pandas as pd
2
3 #import the original file
4 sales = pd.read_csv('Iowa_Liquor_Sales.csv', header=0)
5
6 #transfer the copies into lists
7 lat = sales['STORE LOCATION']
8 lon = sales['STORE LOCATION']
9
10 #separate the latitude and longitude from each cell into their own list
11 hold = [i.split('(', 1)[1] for i in lat]
12 lat2 = [i.split(',', 1)[0] for i in hold]
13 lon2 = [i.split(',', 1)[1] for i in hold]
14 lon2 = [i.split(')', 1)[0] for i in lon2]
15
16 #put the now separate latitude and longitude back into their own columns
17 sales['LATITUDE'] = lat2
18 sales['LONGITUDE'] = lon2
19
20 #drop the store location column
21 sales = sales.drop(['STORE LOCATION'], axis=1)
22
23 #export the new panda data frame into a new file
24 sales.to_csv('liquor_data2.csv')
However, when I try to run the code with the full 3.04 million line file, it gives me this error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Python34\lib\site-packages\pandas\core\generic.py", line 1595, in drop
dropped = self.reindex(**{axis_name: new_axis})
File "C:\Python34\lib\site-packages\pandas\core\frame.py", line 2505, in reindex
**kwargs)
File "C:\Python34\lib\site-packages\pandas\core\generic.py", line 1751, in reindex
self._consolidate_inplace()
File "C:\Python34\lib\site-packages\pandas\core\generic.py", line 2132, in _consolidate_inplace
self._data = self._protect_consolidate(f)
File "C:\Python34\lib\site-packages\pandas\core\generic.py", line 2125, in _protect_consolidate
result = f()
File "C:\Python34\lib\site-packages\pandas\core\generic.py", line 2131, in <lambda>
f = lambda: self._data.consolidate()
File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 2833, in consolidate
bm._consolidate_inplace()
File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 2838, in _consolidate_inplace
self.blocks = tuple(_consolidate(self.blocks))
File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 3817, in _consolidate
_can_consolidate=_can_consolidate)
File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 3840, in _merge_blocks
new_values = _vstack([b.values for b in blocks], dtype)
File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 3870, in _vstack
return np.vstack(to_stack)
File "C:\Python34\lib\site-packages\numpy\core\shape_base.py", line 228, in vstack
return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
MemoryError
I tried running the code line-by-line in the python console and found that the error occurs after the program runs the sales = sales.drop(['STORE LOCATION'], axis=1) line.
I have searched for similar issues elsewhere and the only answer I have come up with is chunking the file as it is read by the program, like this:
#import the original file in chunks, then combine them into one data frame
chunksize = 100000  # example chunk size (not specified in the original snippet)
df = pd.read_csv('Iowa_Liquor_Sales7.csv', header=0, chunksize=chunksize)
sales = pd.concat(df, ignore_index=True)
My only problem with that is then I get this error:
Traceback (most recent call last):
File "C:/Users/Aaron/PycharmProjects/DATA/Liquor_Reasign_Pd.py", line 14, in <module>
lat = sales['STORE LOCATION']
TypeError: 'TextFileReader' object is not subscriptable
My google-foo is all foo'd out. Anyone know what to do?
UPDATE
I should specify that with the chunking method, the error comes about when the program tries to duplicate the store location column.
So I found an answer to my issue. I ran the program in python 2.7 instead of python 3.4. The only change I made was deleting line 8, as it is unused. I don't know if 2.7 just handles the memory issue differently, or if I had improperly installed the pandas package in 3.4. I will reinstall pandas in 3.4 to see if that was the problem, but if anyone else has a similar issue, try your program in 2.7.
UPDATE: I realized that I was running 32-bit Python on a 64-bit machine. I upgraded my version of Python and it now runs without memory errors.