I have some problems with ADPCM in .wav (sound) files.
First, I should say that I haven't read everything about ADPCM; I just wanted to implement it quickly and experiment with it (just training code).
I implemented it from Microchip's ADPCM PDF guide (it's better to say I copy/pasted it and edited it into a class).
Test code:
const std::vector<int8_t> data = {
64, 67, 71, 75, 79, 83, 87, 91, 94, 98, 101, 104, 107, 110, 112,
115, 117, 119, 121, 123, 124, 125, 126, 126, 127, 127, 127, 126, 126, 125,
124, 123, 121, 119, 117, 115, 112, 110, 107, 104, 101, 98, 94, 91, 87,
83, 79, 75, 71, 67, 64, 60, 56, 52, 48, 44, 40, 36, 33, 29,
26, 23, 20, 17, 15, 12, 10, 8, 6, 4, 3, 2, 1, 1, 0,
0, 0, 1, 1, 2, 3, 4, 6, 8, 10, 12, 15, 17, 20, 23,
26, 29, 33, 36, 40, 44, 48, 52, 56, 60, 64};
void function() {
  std::vector<uint8_t> en;
  std::vector<uint8_t> de;
  { // encode
    wave::ADPCM adpcm;
    // 32768
    for (size_t i{0}; i < data.size() - 3; i += 4) {
      int16_t first{static_cast<int16_t>(
          ~((static_cast<uint16_t>(data[i]) & 0xff) |
            ((static_cast<uint16_t>(data[i + 1]) << 8) & 0xff00)) +
          1)};
      int16_t second{static_cast<int16_t>(
          ~((static_cast<uint16_t>(data[i + 2]) & 0xff) |
            ((static_cast<uint16_t>(data[i + 3]) << 8) & 0xff00)) +
          1)};
      en.push_back(static_cast<uint8_t>((adpcm.ADPCMEncoder(first) & 0x0f) |
                                        (adpcm.ADPCMEncoder(second) << 4)));
    }
  }
  { // decode
    wave::ADPCM adpcm;
    for (auto val : en) {
      int16_t result = ~adpcm.ADPCMDecoder(val & 0xf) + 1;
      int8_t temp0 = (result & 0xff);
      int8_t temp1 = (result & 0xff00) >> 8;
      de.push_back(temp0);
      de.push_back(temp1);
      result = ~adpcm.ADPCMDecoder(val >> 4) + 1;
      temp0 = (result & 0xff);
      temp1 = (result & 0xff00) >> 8;
      de.push_back(temp0);
      de.push_back(temp1);
    }
  }
  int i{0};
  for (auto val : de) {
    qDebug() << "real:" << data[i] << "decoded: " << val;
    i++;
  }
}
I'm fairly sure my class and the encode/decode logic are right; I just need to do something after decoding to recover the correct numbers (but I don't know which cast is failing).
Why am I sure? Because when I look at the output in qDebug, every other sample (after decoding) is correct (with a few errors, which would shrink with larger data sets), but the others are wrong.
My output:
real: 26 decoded: 6
real: 29 decoded: 32
real: 33 decoded: 5
real: 36 decoded: 48
real: 40 decoded: 6
real: 44 decoded: 32
real: 48 decoded: 5
real: 52 decoded: 48
real: 56 decoded: 4
real: 60 decoded: 64
The data are 8 bits on the device.
OK, I found my answer.
When a number has an error, the error is in its lower bits!
My prediction: since two encoded samples are packed together into one byte, the sample that ends up in the lower-bit position carries much larger errors!
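For what it's worth, here is a minimal sketch of the packing step (my illustration, reusing the question's wave::ADPCM class; the helper name is mine). Note the two encoder calls are sequenced in separate statements: before C++17, their evaluation order within a single expression is unspecified, which matters when the encoder keeps state between calls:

uint8_t pack_pair(wave::ADPCM& adpcm, int16_t first, int16_t second) {
    uint8_t lo = adpcm.ADPCMEncoder(first) & 0x0f;   // first sample -> low nibble
    uint8_t hi = adpcm.ADPCMEncoder(second) & 0x0f;  // second sample -> high nibble
    return static_cast<uint8_t>(lo | (hi << 4));     // decoder must unpack the low nibble first
}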
This is for my intro to C++ course. We are currently doing arrays and I'm trying to find the min value in each row of the array. Here is what I have:
#include <iostream>
using namespace std;

int main() {
    int grade[4][30] = {
        {76, 70, 80, 90, 100, 83, 61, 63, 64, 65, 97, 69, 70, 79, 60, 70, 80, 90, 100, 83, 61, 63, 99, 98, 66, 69, 70, 79},
        {74, 70, 80, 90, 60, 61, 93, 88, 73, 65, 91, 69, 70, 79, 60, 70, 80, 90, 60, 83, 61, 63, 64, 65, 66, 69, 67, 74},
        {72, 70, 80, 90, 99, 84, 62, 63, 99, 65, 66, 69, 70, 79, 60, 70, 80, 90, 99, 83, 61, 63, 64, 65, 66, 69, 70, 77},
        {69, 70, 80, 90, 60, 61, 86, 63, 97, 97, 66, 69, 70, 79, 97, 70, 80, 90, 88, 83, 88, 63, 64, 65, 66, 69, 70, 79}};
    int a;
    for (int x = 0; x < 4; ++x) {
        a = grade[x][0];
        for (int y = 0; y < 30; ++y) {
            if (a > grade[x][y])
                a = grade[x][y];
            cout << "a is " << a << " for the " << y << " time" << endl;
        }
        cout << a << endl;
    }
    return 0;
}
My problem is I don't understand why, in the last two iterations, the value turns to 0. The real answer should be 60 for each row.
P.S. I used this to find the maximum and it worked, but I don't get why it won't work here.
for(int y = 0; y < 30; ++y){
It is because, for example, your first row contains only 28 explicitly initialized elements and you iterate up to 30 (see above). The elements which you didn't initialize yourself are initialized to 0.
Your array initializers have fewer than 30 numbers. Since each row is declared to hold 30 elements, the remaining entries are set to 0.
Since you don't appear to have 0s in your data, you could use 0 as a sentinel to know when to stop the loop, as shown below.
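If you take that approach, here is a minimal sketch (assuming no legitimate grade is 0) that drops in place of the loops in the question's main():

for (int x = 0; x < 4; ++x) {
    int a = grade[x][0];
    // stop early once we reach a zero-filled (uninitialized) entry
    for (int y = 0; y < 30 && grade[x][y] != 0; ++y)
        if (a > grade[x][y])
            a = grade[x][y];
    cout << a << endl;  // prints 60 for each row
}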
How have/would you design a function that, on each call, returns the next value in a nominated numeric range in lexicographical order of string representation?
Example: range 8..203 --> 10, 100..109, 11, 110..119, 12, 120..129, ..., 19, 190..199, 20, 200..203, 21..29, 30..79, 8, 80..89, 9, 90..99.
Constraints: indices 0..~INT_MAX, fixed space, O(range-length) performance, preferably "lazy" so that if you stop iterating midway you haven't wasted processing effort. Please don't post brute-force "solutions" that iterate numerically while generating strings that are then sorted.
Utility: if you're generating data that ultimately needs to be presented or processed lexicographically, a lexicographical series promises lazy generation as needed, reduces memory requirements and eliminates a sort.
Background: when answering this question today, my solution gave output in numeric order (i.e. 8, 9, 10, 11, 12), not lexicographical order (10, 11, 12, 8, 9) as illustrated in the question. I imagined it would be easy to write or find a solution, but my Google-fu let me down and it was trickier than I expected, so I figured I'd collect/contribute here....
(Tagged C++ as it's my main language and I'm personally particularly interested in C++ solutions, but anything's welcome)
Somebody voted to close this because I either didn't demonstrate a minimal understanding of the problem being solved (hmmmm!?! ;-P) or an attempted solution. My solution is posted as an answer, as I'm happy for it to be commented on and buffeted by the brutal winds of Stack Overflow wisdom.... O_o
This is actually quite easy. First, an observation:
Theorem: if two numbers x and y with x < y are in the series and have the same number of digits, then x comes before y.
Proof: write the digits of x as x_n...x_0 and the digits of y as y_n...y_0, and let i-1 be the left-most position at which they differ (so the digits from index n down to index i agree). We then have:
y = y_n...y_i y_(i-1)...y_0
x = y_n...y_i x_(i-1)...x_0
since all digits from n down to i are the same in both numbers. If x < y, then mathematically:
x_(i-1) < y_(i-1)
Lexicographically, if the digit x_(i-1) is smaller than the digit y_(i-1), then x comes before y.
This theorem means that in your specified range [a, b] you have numbers with different numbers of digits, but the ones that have the same number of digits appear in their numerical order.
Building on that, here's a simple algorithm. First, let's say a has m digits and b has n digits (n >= m):
1. create a heap with lexicographical order
2. initially, insert `a` and `10^i` for each i in [m, n-1]
3. while the heap is not exhausted
3.1. remove and yield the top of the heap (`next`) as the next result
3.2. if `next + 1` is still in range `[a, b]` (and doesn't gain a digit), insert it into the heap
Notes:
In step 2, you are inserting the starting number of each series of numbers that have the same number of digits.
To turn this into a function that returns one number per call, step 3.1 should store the state of the algorithm and resume on the next call. Pretty standard.
Step 3.2 is the part that exploits the above theorem and keeps only the next number in numerical order in the heap.
Assuming N = b - a, the extra space used by this algorithm is O(log N) and its time complexity is O(N * log log N).
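To make this concrete, here is a small C++ sketch of the algorithm (my own illustration, not part of the original answer), using a std::priority_queue of strings as the lexicographic min-heap:

#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Print [a, b] in lexicographical order of the decimal representation.
void lex_order(long a, long b) {
    // min-heap ordered lexicographically (std::greater on strings)
    std::priority_queue<std::string, std::vector<std::string>,
                        std::greater<std::string> > heap;
    heap.push(std::to_string(a));
    // step 2: seed the first number of every longer digit-length in range
    for (long p = 10; p <= b; p *= 10)
        if (p > a) heap.push(std::to_string(p));
    while (!heap.empty()) {
        long next = std::stol(heap.top());   // step 3.1: yield the top
        heap.pop();
        std::cout << next << '\n';
        long succ = next + 1;
        // step 3.2: keep the successor only while it stays in range and
        // doesn't gain a digit (longer digit-lengths were pre-seeded)
        if (succ <= b &&
            std::to_string(succ).size() == std::to_string(next).size())
            heap.push(std::to_string(succ));
    }
}

int main() { lex_order(8, 203); }  // prints 10, 100, ..., 8, 80, ..., 9, 90, ..., 99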
Here's my attempt, in Python:
import math

# iterates through all numbers between start and end that start with `cur`'s digits
def lex(start, end, cur=0):
    if cur > end:
        return
    if cur >= start:
        yield cur
    for i in range(0, 10):
        # add 0-9 to the right of the current number
        next_cur = cur * 10 + i
        if next_cur == 0:
            # we already yielded 0, no need to do it again
            continue
        for ret in lex(start, end, next_cur):
            yield ret

print list(lex(8, 203))
Result:
[10, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 11, 110, 111, 112, 113,
114, 115, 116, 117, 118, 119, 12, 120, 121, 122, 123, 124, 125, 126, 127, 128,
129, 13, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 14, 140, 141, 142,
143, 144, 145, 146, 147, 148, 149, 15, 150, 151, 152, 153, 154, 155, 156, 157,
158, 159, 16, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 17, 170, 171,
172, 173, 174, 175, 176, 177, 178, 179, 18, 180, 181, 182, 183, 184, 185, 186,
187, 188, 189, 19, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 20, 200,
201, 202, 203, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56,
57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76,
77, 78, 79, 8, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 9, 90, 91, 92, 93, 94, 95,
96, 97, 98, 99]
This uses O(log(end)) stack space, which is bounded by the digit count of INT_MAX, so it won't go any deeper than five calls for your typical 16-bit int. It runs in O(end) time, since it has to iterate through numbers smaller than start before it can begin yielding valid numbers. This can be considerably worse than O(end - start) if start and end are large and close together.
Iterating through lex(0, 1000000) takes about six seconds on my machine, so it appears to be slower than Tony's method but faster than Shahbaz's. Of course, it's hard to make a direct comparison since I'm using a different language.
This is a bit of a mess, so I'm curious to see how other people tackle it. There are so many edge cases explicitly handled in the increment operator!
For range low to high:
- 0 is followed by 1
- numbers shorter than high are always followed by 0-appended versions (e.g. 12 -> 120)
- numbers other than high that end in 0-8 are followed by the next integer
- when low has as many digits as high, you finish after high (return sentinel high + 1)
- otherwise you finish at a number 999... with one less digit than high
- for other numbers ending in 9(s), the part before the trailing 9s is incremented, and any resulting trailing 0s are removed, provided the number stays above low
#include <cassert>
#include <sstream>
#include <string>

template <typename T>
std::string str(const T& t)
{
    std::ostringstream oss;
    oss << t;
    return oss.str();
}

template <typename T>
class Lex_Counter
{
  public:
    typedef T value_type;

    Lex_Counter(T from, T to, T first = -1)
        : from_(from), to_(to),
          min_size_(str(from).size()), max_size_(str(to).size()),
          n_(first != -1 ? first : get_first()),
          max_unit_(pow(10, max_size_ - 1)), min_unit_(pow(10, min_size_ - 1))
    { }

    operator T() { return n_; }

    T& operator++()
    {
        if (n_ == 0)
            return n_ = 1;
        if (n_ < max_unit_ && n_ * 10 <= to_)
            return n_ = n_ * 10;            // e.g. 10 -> 100, 89 -> 890
        if (n_ % 10 < 9 && n_ + 1 <= to_)
            return ++n_;                    // e.g. 108 -> 109
        if (min_size_ == max_size_
                ? n_ == to_
                : (n_ == max_unit_ - 1 && to_ < 10 * max_unit_ - 10 || // 99/989
                   n_ == to_ && to_ >= 10 * max_unit_ - 10))           // e.g. 993
            return n_ = to_ + 1;
        // increment the right-most non-9 digit
        // note: all-9s case handled above (n_ == max_unit_ - 1 etc.)
        // e.g. 109 -> 11, 19 -> 2, 239999 -> 24, 2999 -> 3
        // comments below explain 230099 -> 230100
        // search from the right until we find a non-9 digit
        for (int k = 100; ; k *= 10)
            if (n_ % k != k - 1)
            {
                int l = k / 10;             // n_ 230099, k 1000, l 100
                int r = ((n_ / l) + 1) * l; // 230100
                if (r > to_ && r / 10 < from_)
                    return n_ = from_;      // e.g. from_ 8, r 20...
                while (r / 10 >= from_ && r % 10 == 0)
                    r /= 10;                // e.g. 230100 -> 2301
                return n_ = r <= from_ ? from_ : r;
            }
        assert(false);
    }

  private:
    T get_first() const
    {
        if (min_size_ == max_size_ ||
            from_ / min_unit_ < 2 && from_ % min_unit_ == 0)
            return from_;
        // can "fall" from e.g. 321 to 1000
        return min_unit_ * 10;
    }

    T pow(T n, int exp)
    { return exp == 0 ? 1 : exp == 1 ? n : 10 * pow(n, exp - 1); }

    T from_, to_;
    size_t min_size_, max_size_;
    T n_;
    T max_unit_, min_unit_;
};
Performance numbers
I can count from 0 to 1 billion in under a second on a standard Intel machine, single-threaded, MS compiler at -O2.
The same machine/harness running my attempt at Shahbaz's solution (below) takes over 3.5 seconds to count to 100,000. Maybe the std::set isn't a good heap substitute, or there's a better way to use it? Any optimisation suggestions welcome.
#include <cstdlib>  // atoi
#include <set>

template <typename T>
struct Shahbaz
{
    std::set<std::string> s;

    Shahbaz(T from, T to)
        : to_(to)
    {
        s.insert(str(from));
        for (int n = 10; n < to_; n *= 10)
            if (n > from) s.insert(str(n));
        n_ = atoi(s.begin()->c_str());
    }

    operator T() const { return n_; }

    Shahbaz& operator++()
    {
        if (s.empty())
            n_ = to_ + 1;
        else
        {
            s.erase(s.begin());
            if (n_ + 1 <= to_)
            {
                s.insert(str(n_ + 1));
                n_ = atoi(s.begin()->c_str());
            }
        }
        return *this;
    }

  private:
    T n_, to_;
};
Perf code for reference...
#include <iostream>
#include <windows.h>  // DWORD, GetTickCount

void perf()
{
    DWORD start = GetTickCount();
    int to = 1000 * 1000;
    // Lex_Counter<int> counter(0, to);
    Shahbaz<int> counter(0, to);
    while (counter <= to)
        ++counter;
    DWORD elapsed = GetTickCount() - start;
    std::cout << '~' << elapsed << "ms\n";
}
Some Java code (deriving C++ code from this should be trivial), very similar to Kevin's Python solution:
public static void generateLexicographical(int lower, int upper)
{
    for (int i = 1; i < 10; i++)
        generateLexicographical(lower, upper, i);
}

private static void generateLexicographical(int lower, int upper, int current)
{
    if (lower <= current && current <= upper)
        System.out.println(current);
    if (current > upper)
        return;
    for (int i = 0; i < 10; i++)
        generateLexicographical(lower, upper, 10 * current + i);
}

public static void main(String[] args)
{
    generateLexicographical(11, 1001);
}
The order of the if-statements is not important, and one can be made an else of the other, but strangely enough, changing them in any way makes it take about 20% longer.
This just starts with each number from 1 to 9, then recursively appends each digit from 0 to 9 to that number, until we get a number bigger than the upper limit.
It similarly uses O(log upper) space (every digit requires a stack frame) and O(upper) time (we go from 1 to upper).
I/O is obviously the most time-consuming part here. If it is removed and replaced by just incrementing a variable, generateLexicographical(0, 100_000_000); takes about 4 seconds, though this is by no means a proper benchmark.
Hi all! I'm running into some difficulty with a school project in which we're implementing the Serpent cipher. The problem is in the function
setKey(unsigned char (&user_key[32]))
I would like to pass a byte array of size 32 to my function, and then have the function do all manner of things to the values of that array. My code as I've printed it will not compile; I get the error:
no known conversion for argument 1 from ‘unsigned char (*)[32]’ to
‘unsigned char (&) [32]’
I've looked through a number of similar posts, and none of the solutions I've found lead to my code compiling, or if it compiles, doing what I'd like. Unfortunately, I can't remember how I last got my code to compile, but when I did, printing out the values of user_key[] within setKey() gave me the following output:
19, 0, 0, 0, 51, 0, 0, 0, 83, 0, 0, 0, 115, 0, 0, 0, 20, 0, 0, 0, 52, 0, 0, 0,
84, 0, 0, 0, 116, 0, 0, 0
In addition, when it ran, it exited with a segfault after printing the line
"This is the end. My only friend, the end.", along with a "stack smashing detected" warning. In short, my code seems to be all sorts of wrong. Any help would be greatly appreciated!
#include <iostream>
#include <math.h>
using namespace std;

class KeySchedule {
    int ip[64];
    int key_size;
    unsigned long long int k0;
    unsigned long long int k1;
    unsigned long long int k2;
    unsigned long long int k3;

  public:
    KeySchedule() {
        /* The initial permutation. To be applied to the plaintext and keys.
        ip = {0, 32, 64, 96, 1, 33, 65, 97, 2, 34, 66, 98, 3, 35, 67, 99,
              4, 36, 68, 100, 5, 37, 69, 101, 6, 38, 70, 102, 7, 39, 71, 103,
              8, 40, 72, 104, 9, 41, 73, 105, 10, 42, 74, 106, 11, 43, 75, 107,
              12, 44, 76, 108, 13, 45, 77, 109, 14, 46, 78, 110, 15, 47, 79, 111,
              16, 48, 80, 112, 17, 49, 81, 113, 18, 50, 82, 114, 19, 51, 83, 115,
              20, 52, 84, 116, 21, 53, 85, 117, 22, 54, 86, 118, 23, 55, 87, 119,
              24, 56, 88, 120, 25, 57, 89, 121, 26, 58, 90, 122, 27, 59, 91, 123,
              28, 60, 92, 124, 29, 61, 93, 125, 30, 62, 94, 126, 31, 63, 95, 127};
        */
        for (int i = 0; i < 127; i++) {
            ip[i] = (32 * i) % 127;
        }
        ip[127] = 127;
        k3 = 0;
        k2 = 0;
        k1 = 0;
        k0 = 0;
        key_size = -1;
    }

    void setKey(unsigned char (&user_key)[32]) {
        for (int i = 0; i < 32; i++) {
            cout << (int)user_key[i] << endl;
        }
        for (int i = 0; i < 8; i++) {
            k3 ^= (int)user_key[i] << (8 - i);
            k2 ^= (int)user_key[i + 8] << (8 - i);
            k1 ^= (int)user_key[i + 16] << (8 - i);
            k0 ^= (int)user_key[i + 24] << (8 - i);
        }
    }
};

int main() {
    unsigned char testkey[] = {0x01, 0x02, 0x03, 0x04,
                               0x05, 0x06, 0x07, 0x08,
                               0x09, 0x0a, 0x0b, 0x0c,
                               0x0d, 0x0e, 0x0f, 0x10,
                               0x20, 0x30, 0x40, 0x50,
                               0x60, 0x70, 0x80, 0x90,
                               0xa0, 0xb0, 0xc0, 0xd0,
                               0xe0, 0xf0, 0x00, 0xff};
    cout << "The size of testkey is: "
         << sizeof(testkey) / sizeof(*testkey) << endl;
    KeySchedule ks = KeySchedule();
    ks.setKey(&testkey);
    cout << "This is the end. My only friend, the end." << endl;
};
You are passing the address of your unsigned char array, i.e. a pointer to unsigned char[32]. You need this:
ks.setKey(testkey);
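For illustration, here is a tiny standalone sketch (mine, not from the original post) contrasting the two parameter types:

#include <iostream>

void byRef(unsigned char (&key)[32]) {    // reference to array of 32
    std::cout << sizeof(key) << '\n';     // 32: the full array type is preserved
}
void byPtr(unsigned char (*key)[32]) {    // pointer to array of 32
    std::cout << sizeof(*key) << '\n';    // also 32, but the call syntax differs
}

int main() {
    unsigned char k[32] = {};
    byRef(k);   // pass the array itself; binds to unsigned char (&)[32]
    byPtr(&k);  // &k has type unsigned char (*)[32], matching the pointer form
}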
How do I create an array of arrays from a one-dimensional array, splitting it into ascending runs? For example, I have an array like:
long int arr[20] = {23, 91, -71, -63, 22, 55, 51, 73, 17, -19,-65, 44, 95, 66, 82, 85, 97, 30, 54, -34};
and I want to create an array of arrays of the ascending runs, like this (in C++):
23, 91
-71, -63, 22, 55
51, 73
17
-19
-65, 44, 95
66, 82, 85, 97
30, 54
-34
I have already tried to count how many arrays there are:
int sum = 0;
for (int i = 0; i < n - 1; i++)
    if (arr[i] > arr[i+1]) sum++;
return sum;
int sum = 0;
for (int i = 0; i < n - 1; i++)
    if (arr[i] > arr[i+1]) sum++;
return sum;
should be
int sum = 0;
if (n > 0)
{
    for (int i = 0; i < n - 1; i++)
        if (arr[i] > arr[i+1])
            sum++;
    sum++;
}
Your version doesn't count the last sequence of ascending numbers.
That's a start. What you have to do next is allocate enough memory for a pointer to each row. Then go through the numbers again, count the length of each row, allocate the memory for that row, and then go through that row once more, copying its numbers. It's just loops (inside loops); have a go and post the code if you get stuck. A rough sketch follows.
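Here is a rough sketch of those loops-inside-loops, under the assumption that raw new[]/delete[] is wanted rather than vectors; the names are mine:

// Split arr[0..n-1] into ascending runs, returning an array of rows.
// rowCount and rowLens are output parameters so the caller can free everything.
long** splitRuns(const long arr[], int n, int& rowCount, int*& rowLens) {
    rowCount = 0;
    if (n > 0) {                                   // corrected counting loop
        for (int i = 0; i < n - 1; i++)
            if (arr[i] > arr[i + 1]) rowCount++;
        rowCount++;                                // count the final run too
    }
    long** rows = new long*[rowCount];             // one pointer per row
    rowLens = new int[rowCount];
    int start = 0, r = 0;
    for (int i = 0; i < n; i++) {
        if (i == n - 1 || arr[i] > arr[i + 1]) {   // current run ends here
            int len = i - start + 1;
            rowLens[r] = len;
            rows[r] = new long[len];               // allocate this row
            for (int j = 0; j < len; j++)
                rows[r][j] = arr[start + j];       // copy the run
            r++;
            start = i + 1;
        }
    }
    return rows;
}

Each rows[r], rowLens and rows itself need delete[] when you're done; a vector of vectors avoids that bookkeeping, as the next answer suggests.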
How about creating a vector of vectors instead of an array of arrays? With an array you have to determine the size up front, which will give you either an index-out-of-bounds error or a huge waste of space. If you use a vector, you don't have to determine its size beforehand; it allocates more space as you fill it.
If your initial array were in descending order, your 2-D array's shape would be n x 1; if it were ascending, then 1 x n. So you would have to make your 2-D array n x n to avoid an exception, which is unacceptable when n > 10^4 (approximately).
Some basic syntax for a vector of vectors is as follows:
vector<vector<int>> myvect;  // initialization
myvect.at(i).at(j) = x;      // access the element at row i, column j
myvect.at(0).push_back(1);   // append an element to the end of row 0
This website has a nice explanation of vectors.
Here is some sample code. I didn't test it, so there might be small syntax errors:
vector<vector<int>> myvect;  // initialization
const int size = 20;
long int arr[size] = {23, 91, -71, -63, 22, 55, 51, 73, 17, -19,
                      -65, 44, 95, 66, 82, 85, 97, 30, 54, -34};
int row = -1;
long int prev = arr[0] + 1;  // anything bigger than arr[0] forces a new row first
for (int i = 0; i < size; i++) {
    if (arr[i] < prev) {     // run broken: start a new row
        row++;
        myvect.push_back(vector<int>());
    }
    myvect.at(row).push_back(arr[i]);
    prev = arr[i];           // remember the previous element
}
The code is basically like this.
You could do:
int array[9][4] = {
    { 23, 91 },
    { -71, -63, 22, 55 },
    { 51, 73 },
    { 17 },
    { -19 },
    { -65, 44, 95 },
    { 66, 82, 85, 97 },
    { 30, 54 },
    { -34 }
};