I am trying to understand the basics of image/video processing, and recently I learned that all processing should be applied to a linear representation of an image. So I wrote two functions, sRGB -> linear RGB and linear RGB -> sRGB:
void sRGB2lRGB(std::vector<unsigned char>& in, std::vector<unsigned char>& out)
{
    std::vector<double> temp(in.begin(), in.end());
    for (int i = 0; i < temp.size(); i++)
    {
        temp[i] /= 255.0;
    }
    for (int i = 0; i < temp.size(); i++)
    {
        if (temp[i] <= 0.04045)
        {
            temp[i] /= 12.92;
        }
        else
        {
            temp[i] = std::pow((temp[i] + 0.055) / 1.055, 2.4);
        }
    }
    for (int i = 0; i < temp.size(); i++)
    {
        out[i] = temp[i] * 255.0 + 0.5;
    }
}

void lRGB2sRGB(std::vector<unsigned char>& in, std::vector<unsigned char>& out)
{
    std::vector<double> temp(in.begin(), in.end());
    for (int i = 0; i < temp.size(); i++)
    {
        temp[i] /= 255.0;
    }
    for (int i = 0; i < temp.size(); i++)
    {
        if (temp[i] <= 0.0031308)
        {
            temp[i] *= 12.92;
        }
        else
        {
            temp[i] = 1.055 * std::pow(temp[i], 1.0 / 2.4) - 0.055;
        }
    }
    for (int i = 0; i < temp.size(); i++)
    {
        out[i] = temp[i] * 255.0 + 0.5;
    }
}
To test it I tried this:
int main()
{
    std::vector<unsigned char> in(255);
    std::vector<unsigned char> out(255);
    std::vector<unsigned char> out2(255);
    for (int i = 0; i < 255; i++)
    {
        in[i] = i;
    }
    sRGB2lRGB(in, out);
    lRGB2sRGB(out, out2);
    for (int i = 0; i < 255; i++)
    {
        if (out2[i] != i)
        {
            std::cout << "was: " << (int)in[i] << ", now: " << (int)out2[i] << '\n';
        }
    }
}
But it appeared that the closer the value is to 0, the more inaccurate the result. The output is:
was: 1, now: 0
was: 2, now: 0
was: 3, now: 0
was: 4, now: 0
was: 5, now: 0
was: 6, now: 0
was: 7, now: 13
was: 8, now: 13
was: 9, now: 13
was: 10, now: 13
was: 11, now: 13
was: 12, now: 13
was: 14, now: 13
was: 15, now: 13
was: 16, now: 13
was: 17, now: 13
was: 18, now: 22
was: 19, now: 22
was: 20, now: 22
was: 21, now: 22
was: 23, now: 22
was: 24, now: 22
was: 25, now: 22
was: 26, now: 28
was: 27, now: 28
was: 29, now: 28
was: 30, now: 28
was: 31, now: 28
was: 32, now: 34
was: 33, now: 34
was: 35, now: 34
was: 36, now: 34
was: 37, now: 38
was: 39, now: 38
was: 40, now: 38
was: 41, now: 42
was: 43, now: 42
was: 44, now: 42
was: 45, now: 46
was: 47, now: 46
was: 48, now: 50
was: 49, now: 50
was: 51, now: 50
was: 52, now: 53
was: 54, now: 53
was: 55, now: 56
was: 57, now: 56
was: 58, now: 59
was: 60, now: 61
was: 62, now: 61
was: 63, now: 64
was: 65, now: 64
was: 67, now: 66
was: 68, now: 69
was: 70, now: 71
was: 72, now: 73
was: 74, now: 73
was: 76, now: 75
was: 78, now: 77
was: 80, now: 79
was: 82, now: 83
was: 84, now: 85
was: 87, now: 86
was: 89, now: 88
was: 91, now: 92
was: 94, now: 95
was: 97, now: 96
was: 100, now: 99
was: 103, now: 104
was: 107, now: 106
was: 111, now: 112
was: 116, now: 117
was: 123, now: 124
Where am I wrong?
Because you are quantizing the linear floating-point values back into an 8-bit integer representation, the conversion will indeed be destructive.
However, there should be no surprise that this occurs: it is actually the intent when encoding data with the sRGB inverse electro-optical transfer function (EOTF). The purpose is to carry data over 8-bit signals while maintaining a good perceptual uniformity with an overall reduced bandwidth.
I would refer to Charles Poynton's Gamma FAQ, and Poynton, C., & Funt, B. (2014). Perceptual uniformity in digital image representation and display. Color Research and Application, 39(1), 6–15. https://doi.org/10.1002/col.21768
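One practical consequence: if you need a lossless round trip, keep the linear values in floating point (or at least 16 bits) for the duration of the processing, and quantize back to 8-bit sRGB only once at the very end. A minimal sketch of that idea (the function names are mine, not from the question):

```cpp
#include <cmath>
#include <cstdint>

// Decode one 8-bit sRGB code value to linear light, kept as double.
double srgb_to_linear(std::uint8_t v)
{
    double c = v / 255.0;
    return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

// Encode linear light back to an 8-bit sRGB code value, done only once at the end.
std::uint8_t linear_to_srgb(double c)
{
    c = c <= 0.0031308 ? c * 12.92 : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
    return static_cast<std::uint8_t>(c * 255.0 + 0.5);
}
```

Round-tripping every 8-bit code through these two functions is lossless, because no intermediate 8-bit quantization of the linear values takes place.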
I have some problems with ADPCM in .wav (sound) files.
First of all, I should say that I haven't read everything about ADPCM; I just wanted to implement it quickly and experiment with it (just training code).
I implemented it from Microchip's ADPCM PDF guide (more accurately: I copy/pasted it, edited it, and wrapped it in a class).
Test code:
const std::vector<int8_t> data = {
64, 67, 71, 75, 79, 83, 87, 91, 94, 98, 101, 104, 107, 110, 112,
115, 117, 119, 121, 123, 124, 125, 126, 126, 127, 127, 127, 126, 126, 125,
124, 123, 121, 119, 117, 115, 112, 110, 107, 104, 101, 98, 94, 91, 87,
83, 79, 75, 71, 67, 64, 60, 56, 52, 48, 44, 40, 36, 33, 29,
26, 23, 20, 17, 15, 12, 10, 8, 6, 4, 3, 2, 1, 1, 0,
0, 0, 1, 1, 2, 3, 4, 6, 8, 10, 12, 15, 17, 20, 23,
26, 29, 33, 36, 40, 44, 48, 52, 56, 60, 64};
void function() {
    std::vector<uint8_t> en;
    std::vector<uint8_t> de;
    { // encode
        wave::ADPCM adpcm;
        // 32768
        for (size_t i{0}; i < data.size() - 3; i += 4) {
            int16_t first{static_cast<int16_t>(
                ~((static_cast<uint16_t>(data[i]) & 0xff) |
                  ((static_cast<uint16_t>(data[i + 1]) << 8) & 0xff00)) +
                1)};
            int16_t second{static_cast<int16_t>(
                ~((static_cast<uint16_t>(data[i + 2]) & 0xff) |
                  ((static_cast<uint16_t>(data[i + 3]) << 8) & 0xff00)) +
                1)};
            en.push_back(static_cast<uint8_t>((adpcm.ADPCMEncoder(first) & 0x0f) |
                                              (adpcm.ADPCMEncoder(second) << 4)));
        }
    }
    { // decode
        wave::ADPCM adpcm;
        for (auto val : en) {
            int16_t result = ~adpcm.ADPCMDecoder(val & 0xf) + 1;
            int8_t temp0 = (result & 0xff);
            int8_t temp1 = (result & 0xff00) >> 8;
            de.push_back(temp0);
            de.push_back(temp1);
            result = ~adpcm.ADPCMDecoder(val >> 4) + 1;
            temp0 = (result & 0xff);
            temp1 = (result & 0xff00) >> 8;
            de.push_back(temp0);
            de.push_back(temp1);
        }
    }
    int i{0};
    for (auto val : de) {
        qDebug() << "real:" << data[i] << "decoded: " << val;
        i++;
    }
}
I'm sure that my class and the encode/decode are right; it's just that after decoding I should do something to recover the correct numbers (but I don't know which cast is failing).
Why am I sure? Because when I look at my output in qDebug, every other sample (after decoding) is correct (with a few errors; with bigger data the errors would be smaller than now), but the others are wrong.
My output:
real: 26 decoded: 6
real: 29 decoded: 32
real: 33 decoded: 5
real: 36 decoded: 48
real: 40 decoded: 6
real: 44 decoded: 32
real: 48 decoded: 5
real: 52 decoded: 48
real: 56 decoded: 4
real: 60 decoded: 64
The data is 8 bits on the device.
OK, I found my answer:
when a number has an error, the error is in its lower bits!
My prediction concerned the two numbers that were packed together into one byte; the number stored in the lower bit positions picks up much larger errors!
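A likely contributor is the mix of signed casts and manual two's-complement negation in the byte packing. A hedged sketch of the byte handling alone (not the Microchip codec), doing the little-endian split/join on unsigned values so no sign bits leak between bytes:

```cpp
#include <cstdint>
#include <utility>

// Split a signed 16-bit sample into two little-endian bytes.
// Converting to uint16_t first makes the bit pattern well defined.
std::pair<std::uint8_t, std::uint8_t> split_le(std::int16_t s)
{
    std::uint16_t u = static_cast<std::uint16_t>(s);
    return { static_cast<std::uint8_t>(u & 0xff),      // low byte
             static_cast<std::uint8_t>(u >> 8) };      // high byte
}

// Rebuild the signed sample from the two bytes.
std::int16_t join_le(std::uint8_t lo, std::uint8_t hi)
{
    return static_cast<std::int16_t>(static_cast<std::uint16_t>(lo) |
                                     (static_cast<std::uint16_t>(hi) << 8));
}
```

With this round trip, split_le followed by join_le returns the original sample for every 16-bit value, which removes one source of error when comparing encoder input against decoder output.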
I'm still using C++14, so std::sample is out of reach. Is there something equivalent in Boost? I do not want to copy my std::multiset, which isn't reorderable.
As far as I know, there is no such thing in Boost. But you can write a simple one yourself:
#include <random>
#include <set>
#include <vector>

template<typename T>
std::vector<T> sample_items(const std::multiset<T>& ms, int samples)
{
    std::vector<T> ret_value;
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<std::size_t> dis(0, ms.size() - 1);
    for (int i = 0; i < samples; i++)
    {
        auto first = std::begin(ms);
        auto offset = dis(gen);
        std::advance(first, offset);
        ret_value.push_back(*first);
    }
    return ret_value;
}
I do not want to copy my std::multiset which isn't reorderable.
If you still prefer not to pass your multiset to a function, just change the function to work with iterators.
UPDATE
Added a sequential draw algorithm that doesn't require additional storage by dynamically adjusting the probability for selecting the next item in sequence order.
See sequential_sample below
random_sample
I think the semantics of random_sample should be that you don't pick the same sequence element twice.
With multiset you could get duplicate values. Just use set if you don't want that.
To avoid duplicate picks you can generate a set of unique indices until the size matches n and then project the results:
A problem that lurks here is that when doing it naively, you might always return the results in the input order, which is definitely not what you want.
So, you could do a hybrid approach where you keep track of already picked elements. In this implementation I do that, while

- optimizing the storage to avoid dynamic allocation (unless n is > 10)
- optimizing the storage for locality of reference (cache friendliness)
- also caching the iterators with the picked items, so that subsequent picks may optimize iterator traversal, instead of always advancing from the start iterator
There are some more comments in the code, and I left in a few trace statements that may help in understanding how the algorithm and the optimizations operate.
Live On Coliru
#include <random>
#include <set>
#include <iostream>
#include <algorithm>
#include <iterator>
#include <boost/container/flat_set.hpp>
#include <boost/container/small_vector.hpp>
namespace my {
    static std::ostream trace(std::clog.rdbuf()/* or: nullptr*/);

    template <typename It, typename Out, typename URBG>
    Out random_sample(It f, It l, Out out, size_t n, URBG& urbg) {
        size_t const size = std::distance(f, l);
        // adjust n for size (matches std::sample)
        n = std::min(size, n);

        // bind distribution to the random bit generator
        auto pick = [&urbg, dist = std::uniform_int_distribution<size_t>(0, size - 1)]() mutable {
            return dist(urbg);
        };

        // Optimized storage of indices: works best for small n, probably still
        // better than `std::set` for large n.
        // IDEA: For very large n, prefer just a vector, sort+unique until n
        // reached
        //
        // The loc field is a cached (forward) iterator so we reduce repeated
        // traversals.
        // IDEA: when It is of random iterator category, specialize without loc
        // cache
        struct P {
            size_t idx; It loc;
            bool operator<(P const& rhs) const { return idx < rhs.idx; }
        };
        namespace bc = boost::container;
        bc::flat_set<P, std::less<P>, bc::small_vector<P, 10> > picked;

        // generate n unique picks
        while (n-- > 0) {
            auto entry = [&] {
                while (true) {
                    auto insertion = picked.insert({pick(), f});
                    if (insertion.second)
                        return insertion.first;
                }
            }();
            trace << "accept pick: " << entry->idx << "\n";

            // traverse and cache loc
            if (entry == begin(picked)) {
                // advance from scratch
                entry->loc = std::next(f, entry->idx);
            } else {
                // minimum steps from prior cached loc
                auto& prior = *std::prev(entry);
                trace << "using prior reference: " << prior.idx << "\n";
                entry->loc = std::next(prior.loc, entry->idx - prior.idx);
            }

            // output
            *out++ = *entry->loc;
        }
        return out;
    }
} // namespace my
int main() {
    std::multiset<int> const pool {
         0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
        10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
        30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
        50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
        60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
        70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
        80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
        90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
    };

    std::mt19937 engine(std::random_device{}());

    for (int i = 0; i < 3; ++i) {
        my::random_sample(
            pool.begin(), pool.end(),
            std::ostream_iterator<int>(std::cout << "-- random draw (n=3): ", " "),
            3,
            engine);
        std::cout << "\n";
    }
}
Prints, e.g.:
accept pick: 46
accept pick: 98
using prior reference: 46
accept pick: 55
using prior reference: 46
accept pick: 80
accept pick: 12
accept pick: 20
using prior reference: 12
accept pick: 63
accept pick: 80
using prior reference: 63
accept pick: 29
-- random draw (n=3): 46 98 55
-- random draw (n=3): 80 12 20
-- random draw (n=3): 63 80 29
sequential_sample
As announced at the top, if the results being in input-order is not an issue, you can be much more efficient and require no storage at all:
template <typename It, typename Out, typename URBG>
Out sequential_sample(It f, It l, Out out, size_t n, URBG&& urbg) {
    using D = std::uniform_int_distribution<size_t>;
    size_t size = std::distance(f, l);
    n = std::min(n, size);
    D dist;
    for (; n != 0; ++f) {
        if (dist(urbg, D::param_type{ 0, --size }) >= n)
            continue;
        *out++ = *f;
        --n;
    }
    return out;
}
This program combines random_sample and sequential_sample and demonstrates the difference in results:
Live On Coliru
#include <random>
#include <algorithm>
namespace my {
    template <typename It, typename Out, typename URBG>
    Out sequential_sample(It f, It l, Out out, size_t n, URBG&& urbg) {
        using D = std::uniform_int_distribution<size_t>;
        size_t size = std::distance(f, l);
        n = std::min(n, size);
        D dist;
        for (; n != 0; ++f) {
            if (dist(urbg, D::param_type{ 0, --size }) >= n)
                continue;
            *out++ = *f;
            --n;
        }
        return out;
    }
}
#include <boost/container/flat_set.hpp>
#include <boost/container/small_vector.hpp>
namespace my {
    template <typename It, typename Out, typename URBG>
    Out random_sample(It f, It l, Out out, size_t n, URBG& urbg) {
        using Dist = std::uniform_int_distribution<size_t>;
        size_t const size = std::distance(f, l);
        // adjust n for size (matches std::sample)
        n = std::min(size, n);

        // bind distribution to the random bit generator
        auto pick = [&urbg, dist = Dist(0, size - 1)]() mutable {
            return dist(urbg);
        };

        // Optimized storage of indices: works best for small n, probably still
        // better than `std::set` for large n.
        // IDEA: For very large n, prefer just a vector, sort+unique until n
        // reached
        //
        // The loc field is a cached (forward) iterator so we reduce repeated
        // traversals.
        // IDEA: when It is of random iterator category, specialize without loc
        // cache
        struct P {
            size_t idx; It loc;
            bool operator<(P const& rhs) const { return idx < rhs.idx; }
        };
        namespace bc = boost::container;
        bc::flat_set<P, std::less<P>, bc::small_vector<P, 10> > picked;

        // generate n unique picks
        while (n-- > 0) {
            auto entry = [&] {
                while (true) {
                    auto insertion = picked.insert({pick(), f});
                    if (insertion.second)
                        return insertion.first;
                }
            }();

            // traverse and cache loc
            if (entry == begin(picked)) {
                // advance from scratch
                entry->loc = std::next(f, entry->idx);
            } else {
                // minimum steps from prior cached loc
                auto& prior = *std::prev(entry);
                entry->loc = std::next(prior.loc, entry->idx - prior.idx);
            }

            // output
            *out++ = *entry->loc;
        }
        return out;
    }
} // namespace my
#include <set>
#include <iostream>
#include <iterator>
int main() {
    std::multiset<int> const pool {
         0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
        10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
        30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
        50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
        60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
        70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
        80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
        90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
    };

    std::mt19937 engine(std::random_device{}());

    constexpr int N = 10;
    for (int i = 0; i < N; ++i) {
        my::sequential_sample(
            pool.begin(), pool.end(),
            std::ostream_iterator<int>(std::cout << "-- sequential draw (n=3): ", " "),
            3,
            engine);
        std::cout << "\n";
    }
    for (int i = 0; i < N; ++i) {
        my::random_sample(
            pool.begin(), pool.end(),
            std::ostream_iterator<int>(std::cout << "-- random draw (n=3): ", " "),
            3,
            engine);
        std::cout << "\n";
    }
}
Prints, e.g.:
-- sequential draw (n=3): 14 66 71
-- sequential draw (n=3): 24 26 30
-- sequential draw (n=3): 19 34 65
-- sequential draw (n=3): 16 41 49
-- sequential draw (n=3): 15 25 37
-- sequential draw (n=3): 15 49 84
-- sequential draw (n=3): 12 53 88
-- sequential draw (n=3): 46 70 94
-- sequential draw (n=3): 32 51 56
-- sequential draw (n=3): 32 37 95
-- random draw (n=3): 15 38 35
-- random draw (n=3): 61 64 58
-- random draw (n=3): 4 37 93
-- random draw (n=3): 0 43 84
-- random draw (n=3): 58 52 59
-- random draw (n=3): 81 43 3
-- random draw (n=3): 41 30 89
-- random draw (n=3): 58 9 84
-- random draw (n=3): 15 39 27
-- random draw (n=3): 74 27 9
I'm having an issue using OpenCV to convert an image to an array. The conversion works, however I seem to get incorrect dimensions in the resulting array:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    auto img = cv::imread("test.jpg", CV_LOAD_IMAGE_COLOR);
    std::cout << "img cols: " << img.cols << " img rows: "
              << img.rows << " channels: " << img.channels() << std::endl;
    std::vector<float> array2;
    if (img.isContinuous()) {
        array2.assign((float*)img.ptr(0), (float*)(img.ptr(img.rows - 1)) + img.cols);
        std::cout << array2.size() << "\n";
    }
    return 0;
}
The output from the first print line results in :
img cols: 416 img rows: 416 channels: 3
Which is correct; however, after assigning the data to the array, its size is 518336, when it should be 519168 (416*416*3).
Could anyone suggest what is causing the resulting array to be smaller than expected?
There are several problems with your code:
First of all, cv::imread("test.jpg", CV_LOAD_IMAGE_COLOR); will (on success) return a cv::Mat with datatype CV_8UC3, however you're accessing the elements as floats. This means that the values you will read will be garbage, and you will also end up reading past the end of the pixel buffer.
If you want floats, then you need to do some conversion/casting, either before or during the act of copying.
The second problem lies in your calculation of the "end" pointer, where you seem to forget that you're dealing with a multi-channel cv::Mat. In case of a CV_8UC3 matrix, each pixel is represented by 3 bytes, hence there are cols*channels bytes per row. (That's why you're short by 2*416 elements)
Not really a problem, but a limitation -- your code only works for continuous Mats.
I would take a somewhat different approach, and take advantage of functionality provided by OpenCV.
Option 1
Use cv::Mat::copyTo, since OutputArray can wrap a std::vector<T>. However, for this to work, the source Mat needs to have 1 channel and 1 row. We can achieve this efficiently using cv::Mat::reshape, but the Mat needs to be continuous, so that limitation stays.
std::vector<uchar> to_array_v1(cv::Mat3b const& img)
{
    std::vector<uchar> a;
    if (img.isContinuous()) {
        img.reshape(1, 1).copyTo(a);
    }
    return a;
}
Option 2
Use MatIterators which we can get using cv::Mat::begin and cv::Mat::end. The iterators will work correctly even on a non-continuous Mat, however we need them to iterate over bytes, so we need to reshape the matrix to a single channel one. Since we're not changing the number of rows, the reshape will also work on a non-continuous Mat.
std::vector<uchar> to_array_v2(cv::Mat3b const& img)
{
    cv::Mat1b tmp(img.reshape(1));
    return std::vector<uchar>(tmp.begin(), tmp.end());
}
Option 3
The approach suggested by Silencer, using the rather poorly documented cv::Mat::datastart and cv::Mat::dataend members. The documentation of cv::Mat::locateROI sheds some more light on the meaning of those member variables:
However, each submatrix contains information (represented by datastart and dataend fields) that helps reconstruct the original matrix size and the position of the extracted submatrix within the original matrix.
This means that this approach has 2 limitations: it needs a continuous matrix, and it won't work correctly for a submatrix, even if it is continuous. (Specifically, for a continuous submatrix, it would return the entire buffer of the "parent" matrix.)
std::vector<uchar> to_array_v3(cv::Mat3b const& img)
{
    std::vector<uchar> a;
    if (img.isContinuous() && !img.isSubmatrix()) {
        a.assign(img.datastart, img.dataend);
    }
    return a;
}
Test Code
#include <opencv2/opencv.hpp>
#include <iostream>
#include <numeric>
#include <vector>

// Paste implementations from the answer here

cv::Mat3b test_image()
{
    cv::Mat1b m(4, 4);
    std::iota(m.begin(), m.end(), 0);
    cv::Mat3b img;
    cv::merge(std::vector<cv::Mat1b>{ m * 3, m * 3 + 1, m * 3 + 2 }, img);
    return img;
}

void print(cv::Mat3b const& img)
{
    std::cout << "Continuous: " << (img.isContinuous() ? "yes" : "no") << '\n';
    std::cout << "Submatrix: " << (img.isSubmatrix() ? "yes" : "no") << '\n';
    std::cout << img << "\n";
}

void print(std::vector<uchar> const& a)
{
    if (a.empty()) {
        std::cout << "empty";
    } else {
        for (auto n : a) {
            std::cout << int(n) << ' ';
        }
    }
    std::cout << "\n";
}

void test(cv::Mat3b const& img)
{
    print(img);
    print(to_array_v1(img));
    print(to_array_v2(img));
    print(to_array_v3(img));
}

int main()
{
    cv::Mat3b img(test_image());
    test(img);

    cv::Mat3b img2(img(cv::Rect(0, 0, 3, 3)));
    test(img2);

    cv::Mat3b img3(img(cv::Rect(1, 1, 3, 1)));
    test(img3);

    return 0;
}
Running this program will produce the following output:
Continuous: yes
Submatrix: no
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11;
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23;
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35;
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
Continuous: no
Submatrix: yes
[ 0, 1, 2, 3, 4, 5, 6, 7, 8;
12, 13, 14, 15, 16, 17, 18, 19, 20;
24, 25, 26, 27, 28, 29, 30, 31, 32]
empty
0 1 2 3 4 5 6 7 8 12 13 14 15 16 17 18 19 20 24 25 26 27 28 29 30 31 32
empty
Continuous: yes
Submatrix: yes
[ 15, 16, 17, 18, 19, 20, 21, 22, 23]
15 16 17 18 19 20 21 22 23
15 16 17 18 19 20 21 22 23
empty
Mat img = imread("test.png");
std::vector<uchar> arr;
// convert Mat of CV_8UC3 to std::vector<uchar> if continuous
if (img.isContinuous()) {
    arr.assign(img.datastart, img.dataend);
}
I'm encoding a byte array into a QR code using libqrencode, and then trying to decode it using the zbar library. The programming language is C++.
The problem occurs when the values are >= 128. For example, when I decode the QR code which contains the following values:
unsigned char data[17]={111, 127, 128, 224, 255, 178, 201,200, 192, 191,22, 17,20, 34, 65 ,23, 76};
symbol->get_data_length() returns 25 instead of 17, and when I tried to print the values using this small piece of code:
string input_data = symbol->get_data();
for (int k = 0; k < 25; k++)
    cout << (int)((unsigned char)input_data[k]) << ", ";
I got the following result:
111, 127, 194, 128, 195, 160, 195, 191, 194, 178, 195, 137, 195, 136, 195, 128, 194, 191, 22, 17, 20, 34, 65, 23, 76,
So as we can see, the values < 128 were not affected, but I got two bytes for every value >= 128.
Also I printed the values without casting to unsigned char:
for (int k = 0; k < 25; k++)
    cout << (int)input_data[k] << ", ";
and the result is:
111, 127, -62, -128, -61, -96, -61, -65, -62, -78, -61, -119, -61, -120, -61, -128, -62, -65, 22, 17, 20, 34, 65, 23, 76
I solved this problem with the following code:
void process_zbar_output(const string& input_data, vector<unsigned char>& output_data)
{
    for (int i = 0; i < input_data.length(); i++)
    {
        int temp = (int) input_data[i];
        // if the original value is >= 128 we need to process it to get the original value
        if (temp < 0)
        {
            // if the lead byte is -62 (0xC2), the original is between 128 and 191
            // if the lead byte is -61 (0xC3), the original is between 192 and 255
            if (temp == -62)
                output_data.push_back(256 + ((int) input_data[i + 1]));
            else
                output_data.push_back(256 + ((int) input_data[i + 1] + 64));
            i++;
        }
        else
        {
            output_data.push_back(input_data[i]);
        }
    }
}
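The pattern in the output (128 becomes C2 80, 224 becomes C3 A0, 255 becomes C3 BF) strongly suggests the library treats the payload as Latin-1 text and re-encodes each byte >= 128 as a two-byte UTF-8 sequence. If that is what is happening, the workaround above can be written as a plain two-byte UTF-8 decode instead of special-casing the two lead bytes (a sketch; the function name is mine):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Undo a Latin-1 -> UTF-8 re-encoding: each 0xC2/0xC3 lead byte plus its
// continuation byte collapses back to one byte in the range 0x80..0xFF.
std::vector<std::uint8_t> utf8_to_bytes(std::string const& in)
{
    std::vector<std::uint8_t> out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        std::uint8_t c = static_cast<std::uint8_t>(in[i]);
        if (c >= 0xC2 && c <= 0xC3 && i + 1 < in.size()) {
            std::uint8_t next = static_cast<std::uint8_t>(in[++i]);
            // standard two-byte UTF-8 decode: 5 bits from the lead, 6 from the tail
            out.push_back(static_cast<std::uint8_t>(((c & 0x1F) << 6) | (next & 0x3F)));
        } else {
            out.push_back(c);
        }
    }
    return out;
}
```

Applied to the 25-byte string from the question, this recovers the original 17 bytes.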
Can anybody help me with this problem and explain why I got these extra bytes?
How do I create an array of arrays from a one-dimensional array of ascending series? Example:
I have an array like :
long int arr[20] = {23, 91, -71, -63, 22, 55, 51, 73, 17, -19,-65, 44, 95, 66, 82, 85, 97, 30, 54, -34};
and I want to create an array of arrays in ascending order, like this (in C++):
23, 91
-71, -63, 22, 55
51, 73
17
-19
-65, 44, 95
66, 82, 85, 97
30, 54
-34
I have already tried to find out how many arrays there are:
int sum = 0;
for (int i = 0; i < n - 1; i++)
    if (arr[i] > arr[i + 1]) sum++;
return sum;
int sum = 0;
for (int i = 0; i < n - 1; i++)
    if (arr[i] > arr[i + 1]) sum++;
return sum;
should be
int sum = 0;
if (n > 0)
{
    for (int i = 0; i < n - 1; i++)
        if (arr[i] > arr[i + 1])
            sum++;
    sum++;
}
Your version doesn't count the last sequence of ascending numbers.
That's a start, what you have to do next is allocate enough memory for a pointer to each row. Then you go through the numbers again, count the length of each row, allocate the memory for that row, and then go through that row again, copying the numbers for that row. It's just loops (inside loops). Have a go and post the code if you get stuck.
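If you do get stuck, the plan above (count the runs, allocate a row pointer per run, then a second pass to fill each row) might be sketched like this; the function name and the out-parameters are mine:

```cpp
// Split arr[0..n) into its ascending runs. rowCount and rowLens are filled in
// here; the caller must delete[] each row, then rowLens and the row array.
long** split_runs(const long* arr, int n, int& rowCount, int*& rowLens)
{
    // first pass: count the runs (each descent starts a new one)
    rowCount = 0;
    for (int i = 0; i < n - 1; ++i)
        if (arr[i] > arr[i + 1]) ++rowCount;
    if (n > 0) ++rowCount;

    long** rows = new long*[rowCount];
    rowLens = new int[rowCount];

    // second pass: copy each run into its own freshly allocated row
    int row = 0, start = 0;
    for (int i = 0; i < n; ++i) {
        if (i == n - 1 || arr[i] > arr[i + 1]) {
            int len = i - start + 1;
            rows[row] = new long[len];
            for (int j = 0; j < len; ++j)
                rows[row][j] = arr[start + j];
            rowLens[row++] = len;
            start = i + 1;
        }
    }
    return rows;
}
```

For the first eight numbers in the question this produces three rows: {23, 91}, {-71, -63, 22, 55}, and {51, 73}.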
How about creating a vector of vectors instead of an array of arrays? With an array you have to determine the size up front, which will give you either an index-out-of-bounds error or a huge waste of space. If you use a vector, you don't have to determine its size; it allocates more space as you fill it.
If your initial array is in descending order, your 2-D array size will be n x 1; if it is in ascending order, then 1 x n. So you would have to make your 2-D array n x n to avoid going out of bounds, which is unacceptable when n > 10^4 (approximately).
Some basic syntax of vector of vector is as follows:
vector<vector<int>> myvect; //initialization
myvect.at(i).at(j) = x; //reaching i_th row, j_th col element
myvect.at(0).push_back(1); //add element to the end of the row0.
This website looks nice about explaining vectors.
Here is sample code. I didn't test it, so there might be small syntax errors:
vector<vector<int>> myvect; // initialization
const int size = 20;        // must be a compile-time constant for a plain array
long int arr[size] = {23, 91, -71, -63, 22, 55, 51, 73, 17, -19, -65, 44, 95, 66, 82, 85, 97, 30, 54, -34};
int row = -1;
long int prev = arr[0] + 1; // anything greater than arr[0], so the first element starts a row
for (int i = 0; i < size; i++) {
    if (arr[i] < prev) {    // a descent starts a new row
        row++;
        myvect.push_back(vector<int>());
    }
    myvect.at(row).push_back(arr[i]);
    prev = arr[i];
}
The code is basically like this.
You could do
long array[9][4] = {
    { 23, 91 },
    { -71, -63, 22, 55 },
    { 51, 73 },
    { 17 },
    { -19 },
    { -65, 44, 95 },
    { 66, 82, 85, 97 },
    { 30, 54 },
    { -34 }
};