I need quick advice on this code: it does not compile and I can't figure out what's wrong with it. I am just trying to print the elements of the array on PORTC, which is my output port. Thanks.
#include <htc.h>
#define _XTAL_FREQ 500000

void main()
{
    int x[8] = {0b1110, 0b1010, 0b1000, 0b1001, 0b0001, 0b0101, 0b0111, 0b0110, 0b1110};
    int i, PORTC;
    TRISC = 0;              // set PORTC as OUTPUT
    PORTC = 0b0000;
    for (;;) {              // forever
        for (i = 0; i < 8; i++) {
            PORTC = n[i] = i + 1;   /* set element at location i to i + 1 */
            __delay_ms(500);
        }
    }
}
You reference n[i] when you apparently mean x[i]. Note also that x[8] is declared with eight elements but given nine initializers; drop one value (the trailing 0b1110 duplicates the first).
You really shouldn't declare PORTC as a local int; it's supposed to be a special "variable" that mirrors the hardware register. You might need some processor-specific include(s) too, not sure.
In the absence of someone who actually knows something about the PIC, I suggest you try something like this:
void main() {
    int x[8] = {0b1110, 0b1010, 0b1000, 0b1001, 0b0001, 0b0101, 0b0111, 0b0110};
    int i;
    TRISC = 0;              // all PORTC pins as outputs
    PORTC = 0b0000;
    for (;;) {              // forever
        for (i = 0; i < 8; i++) {
            PORTC = x[i];
            __delay_ms(500);
        }
    }
}
TRISC is the control register for PORTC. A value of zero sets all pins on PORTC to be outputs.
PORTC is an input/output port. I assume it's hooked up to a display of some kind. With the proper setting of TRISC it should act as an output port.
This should output 8 values from the table to the port at half-second intervals and repeat forever. Kind of a "hello world" for microprocessors.
I am working on the following lines of code:
#define WDOG_STATUS 0x0440
#define ESM_OP 0x08
and in a method of my defined class I have:
bool WatchDog = 0;
bool Operational = 0;
unsigned char i;
ULONG TempLong;
unsigned char Status;

TempLong.Long = SPIReadRegisterIndirect(WDOG_STATUS, 1);   // read watchdog status
if ((TempLong.Byte[0] & 0x01) == 0x01)
    WatchDog = 0;
else
    WatchDog = 1;

TempLong.Long = SPIReadRegisterIndirect(AL_STATUS, 1);
Status = TempLong.Byte[0] & 0x0F;
if (Status == ESM_OP)
    Operational = 1;
else
    Operational = 0;
What SPIReadRegisterIndirect() does is take an unsigned short as the address of the register to read and an unsigned char Len as the number of bytes to read.
What is baffling me is Byte[]. I am assuming this is a way to pick out part of the value in Long (which is read from SPIReadRegisterIndirect), but why [0]? Shouldn't it be 1? And how does it work? If it is isolating only one byte, for example in the WatchDog case, is TempLong.Byte[0] equal to 04 or 40? (When I print the value before the if statement, it is shown as 0, which is neither 04 nor 40 from the defined WDOG_STATUS register.)
Please consider that I am new to this subject. I have done a Google search and other searches, but unfortunately I could not find what I wanted. Could somebody please help me understand how this syntax works, or direct me to documentation where I can read about it?
Thank you in advance.
Your ULONG must be defined somewhere; otherwise you'd get the error 'ULONG' does not name a type. Probably something like:
typedef union { unsigned long Long; byte Byte[4]; } ULONG;
Check union (and typedef) in your C/C++ book, and you'll see that this union helps reinterpret the long variable as an array of bytes.
Byte[0] is the first byte. Depending on the hardware (AVR controllers are little-endian), it's probably the LSB (least significant byte).
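To make that concrete, here is a minimal sketch (an assumption: it takes unsigned long to be 4 bytes, as on AVR; on a 64-bit desktop it may be 8) showing which array element holds which byte on a little-endian machine:

#include <stdio.h>

typedef union {
    unsigned long Long;
    unsigned char Byte[4];   /* assumes unsigned long is 4 bytes */
} ULONG;

int main(void)
{
    ULONG t;
    t.Long = 0x12345678UL;
    /* On a little-endian target this prints "78 56 34 12":
       Byte[0] is the least significant byte. */
    for (int i = 0; i < 4; i++)
        printf("%02x ", t.Byte[i]);
    printf("\n");
    return 0;
}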
I just wanted to see how the subTree array changes while the dfs() function runs.
Here is the code:
#include <bits/stdc++.h>
using namespace std;

#define w(x) int x; cin >> x; while (x--)
#define nl "\n"
#define fr(i,t) for (int i = 0; i < t; i++)
#define fr1(i,a,b) for (int i = a; i < b; i++)
#define frr(i,n) for (int i = n; i >= 0; i--)
#define frr1(i,a,b) for (int i = a; i >= b; i--)
#define dbug(x) cout << #x << "=" << x << endl;
#define fast ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL);
#define pb push_back

// int:                    4 bytes, -2^31 to +2^31 - 1 (-2147483648 to +2147483647)
// long long int:          8 bytes, -2^63 to +2^63 - 1
// unsigned long long int: 8 bytes, 0 to 2^64 - 1
// INT_MAX 0x7fffffff 2147483647

const int M1 = 1000000007;
const int M2 = 998244353;
const int N = 100005;

vector<int> g[N];
int subTree[N];
bool vis[N];

int dfs(int u) {
    vis[u] = true;
    if (g[u].size() == 1) {   // leaf node
        subTree[u] = 1;
        return 1;
    }
    for (auto &v : g[u]) {
        if (!vis[v]) subTree[u] += dfs(v);
    }
    return ++subTree[u];
}

int main() {
    fast;
    int n, m, k, a, b, temp;
    cin >> n >> m;
    fr(i, m) {
        cin >> a >> b;
        g[a].pb(b);
        g[b].pb(a);
    }
    dfs(1);
    fr1(i, 1, 8) {
        cout << subTree[i] << " ";
    }
}
" ... how to watch whats happening ..."
Whenever my understanding of gdb falls short, I do not hesitate to add a (probably temporary) viewing mechanism to cout useful 'state-info'.
Consider:
add a "std::stringstream ssDbg;" above " int dfs(int u) ", but in scope,
add one or more statements (inside of dfs) to contribute information to ssDbg. They each have the form
"ssDbg << " [... usefull status info ...] " << endl" .
set a breakpoint (or maybe 2) inside of dfs(int).
then, when you want to inspect the behavior as reflected in the ssDbg contents
add one (or maybe 2) small 'c-style' functions to show whats up. I use a c-style function (i.e. not a class function attribute) because gdb handles c-style-functions better and are easier to invoke. Yes, c-style can access your class, and you can even declare these support functions a friend.
4.a) Your functions will at least display what has been captured, i.e.
void coutCapture() { cout << ssDbg.str() << endl; }; "
4.b) Your functions might display other current state info (i.e. do not limit your outputs to just the capture contents.)
4.c You might want each coutCapture() display effort to also clear and reset the ssDbg,
4.d or you might want a separate ssClr() to let the contents build.
I use the following:
void ssClr(stringstream& ss) { ss.str(string()); ss.clear(); }   // clear data, clear flags
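Putting the pieces together, a minimal self-contained sketch of the idea applied to the dfs() above (the tree edges in main() are made up for illustration; run under gdb and use "call coutCapture()" at a breakpoint):

#include <iostream>
#include <sstream>
#include <vector>
using namespace std;

const int N = 100005;
vector<int> g[N];
int subTree[N];
bool vis[N];

stringstream ssDbg;   // capture buffer, in scope above dfs()

void ssClr(stringstream& ss) { ss.str(string()); ss.clear(); }   // clear data, clear flags
void coutCapture() { cout << ssDbg.str() << endl; }              // (gdb) call coutCapture()

int dfs(int u) {
    vis[u] = true;
    ssDbg << "enter dfs(" << u << ")" << endl;                   // instrumentation
    if (g[u].size() == 1) {   // leaf node
        subTree[u] = 1;
        return 1;
    }
    for (auto &v : g[u])
        if (!vis[v]) subTree[u] += dfs(v);
    ssDbg << "subTree[" << u << "] = " << subTree[u] + 1 << endl; // value about to be returned
    return ++subTree[u];
}

int main() {
    int edges[][2] = { {1, 2}, {1, 3}, {3, 4} };                  // tiny fixed tree
    for (auto &e : edges) { g[e[0]].push_back(e[1]); g[e[1]].push_back(e[0]); }
    dfs(1);
    coutCapture();                                                // dump the captured trace
}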
Summary: coutCapture() and the "ssDbg << ..." instrumentation augment gdb in a customized way. I usually find gdb sufficient on its own.
I developed this technique on embedded systems, because of various and sometimes unique limitations.
Also, be sure to review the gdb documentation; every time I look I find more things to try.
I was reading an interesting post about why it is faster to process a sorted array than an unsorted array, and saw a comment by @mp31415 that said:
Just for the record, on Windows / VS2017 / i7-6700K 4GHz there is NO difference between the two versions. It takes 0.6s for both cases. If the number of iterations in the outer loop is increased 10 times, the execution time increases 10 times too, to 6s, in both cases.
So I tried it on an online C/C++ compiler (with, I suppose, a modern server architecture), and I get ~1.9s for the sorted case and ~1.85s for the unsorted one; not much of a difference, but repeatable.
So I wonder: is it still true for modern architectures?
The question was from 2012, not so far from now...
Or where am I wrong?
Clarification for reopening:
Please forget about my adding the C code as an example. That was a terrible mistake. Not only was the code erroneous, but posting it misled people into focusing on the code itself rather than on the question.
When I first tried the C++ code used in the link above, I got only a 2% difference (1.9s vs 1.85s).
My first question and intent was about the previous post, its C++ code, and the comment by @mp31415.
@rustyx made an interesting comment, and I wondered if it could explain what I observed:
Interestingly, a debug build exhibits 400% difference between sorted/unsorted, and a release build at most 5% difference (i7-7700).
In other words, my question is: why did the C++ code in the previous post not perform as well as the previous OP claimed?
Refined as: could the timing difference between a release build and a debug build explain it?
You're a victim of the as-if rule:
... conforming implementations are required to emulate (only) the observable behavior of the abstract machine ...
Consider the function under test ...
const size_t arraySize = 32768;
int *data;

long long test()
{
    long long sum = 0;
    for (size_t i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (size_t c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }
    return sum;
}
And look at the generated assembly (VS 2017, x86_64, /O2 mode).
The machine does not execute your loops; instead, it executes a similar program that does this:
long long test()
{
    long long sum = 0;
    // Primary loop
    for (size_t c = 0; c < arraySize; ++c)
    {
        for (size_t i = 0; i < 20000; ++i)
        {
            if (data[c] >= 128)
                sum += data[c] * 5;
        }
    }
    return sum;
}
Observe how the optimizer reversed the order of the loops and defeated your benchmark.
Obviously the latter version is much more branch-predictor-friendly.
You can in turn defeat the loop hoisting optimization by introducing a dependency in the outer loop:
long long test()
{
    long long sum = 0;
    for (size_t i = 0; i < 100000; ++i)
    {
        sum += data[sum % 15];   // <== dependency!
        // Primary loop
        for (size_t c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }
    return sum;
}
Now this version again exhibits a massive difference between sorted and unsorted data: on my system (i7-7700), 1.6s vs 11s (or 700%).
Conclusion: the branch predictor is more important than ever these days, as we face unprecedented pipeline depths and instruction-level parallelism.
The scenario is this: one process uses epoll on several sockets, all set non-blocking and edge-triggered. When an EPOLLIN event occurs on one socket, we start reading on its fd; but the problem is that so much data keeps coming in that, in the loop reading the data, the return value of recv is always larger than 0. So the application is stuck there, reading data, and cannot move on.
Any idea how I should deal with this?
constexpr int max_events = 10;
constexpr int buf_len = 8192;
....
epoll_event events[max_events];
char buf[buf_len];
int n;

auto fd_num = epoll_wait(...);
for (auto i = 0; i < fd_num; i++) {
    if (events[i].events & EPOLLIN) {
        for (;;) {
            n = ::read(events[i].data.fd, buf, sizeof(buf));
            if (errno == EAGAIN)
                break;
            if (n <= 0) {
                on_disconnect_(events[i].data.fd);
                break;
            }
            else {
                on_data_(events[i].data.fd, buf, n);
            }
        }
    }
}
When using edge-triggered mode, the data must be read in one recv call, otherwise it risks starving other sockets. This issue has been written about in numerous blogs, e.g. Epoll is fundamentally broken.
Make sure that your user-space receive buffer is at least the same size as the kernel receive socket buffer. This way you read the entire kernel buffer in one recv call.
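One way to size that buffer (a sketch; the helper name and the fallback size are assumptions) is to ask the kernel via getsockopt(SO_RCVBUF):

#include <sys/socket.h>
#include <vector>

// Size a user-space buffer to match the socket's kernel receive buffer,
// so one recv() can take everything the kernel may be holding.
// Note: on Linux, getsockopt(SO_RCVBUF) reports twice the value passed
// to setsockopt, because the kernel doubles it for bookkeeping overhead.
std::vector<char> make_recv_buffer(int fd)
{
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) != 0 || rcvbuf <= 0)
        rcvbuf = 8192;   // fall back to the question's buffer size
    return std::vector<char>(static_cast<size_t>(rcvbuf));
}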
Also, you can process ready sockets in a round-robin fashion, so that the control flow does not get stuck in a recv loop for one socket. That works best with a user-space receive buffer of the same size as the kernel one. E.g.:
auto n = epoll_wait(...);
for (int dry = 0; dry < n;) {
    for (auto i = 0; i < n; i++) {
        if (events[i].events & EPOLLIN) {
            // Do only one read call for each ready socket
            // before moving on to the next ready socket.
            auto r = recv(...);
            if (-1 == r) {
                if (EAGAIN == errno) {
                    events[i].events ^= EPOLLIN;
                    ++dry;
                }
                else
                    ;   // Handle error.
            }
            else if (!r) {
                // Process client disconnect.
            }
            else {
                // Process data received so far.
            }
        }
    }
}
This version can be further improved to avoid scanning the entire events array on each iteration.
In your original post, do {} while (n > 0); is incorrect and leads to an endless loop. I assume it is a typo.
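For reference, a drain loop for a single edge-triggered, non-blocking socket should check read's return value before consulting errno, since errno is only meaningful after a failure. A sketch, reusing the question's on_data_/on_disconnect_ callbacks (declared here only as prototypes):

#include <cerrno>
#include <unistd.h>

void on_data_(int fd, const char* data, ssize_t len);   // the question's callbacks,
void on_disconnect_(int fd);                            // assumed defined elsewhere

// Drain one non-blocking fd until the kernel buffer is empty.
void drain(int fd, char* buf, size_t buf_len)
{
    for (;;) {
        ssize_t n = ::read(fd, buf, buf_len);
        if (n > 0) {
            on_data_(fd, buf, n);               // consume this chunk
        } else if (n == 0) {
            on_disconnect_(fd);                 // peer closed the connection
            break;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;                              // drained; wait for the next EPOLLIN
        } else if (errno != EINTR) {
            on_disconnect_(fd);                 // real error
            break;
        }
    }
}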
I have an array with data from the ADC. From this data I need to save 10 periods per second to the SD card. To detect the periods I use a zero-crossing function: I increment a counter every time the signal crosses zero (changes sign) and write a file. I have two problems. The first is setting up the timer to send 10 periods of data every second. The second is that I want to send just those 10 periods of data, have a break, and then continue sending the next 10 periods.
This code works; my question is how I can implement the timing. How can I send 10 periods in one second?
void zeroCrossing(float* data, float* zerCross, int nx)
{
    int i;
    int a = 0;
    bool sign1, sign2;
    memset(zerCross, 0, nx * sizeof(float));   /* zero the output array */
    for (i = 0; i < nx - 1; i++) {             /* loop over data */
        sign1 = getSign(data[i]);
        sign2 = getSign(data[i + 1]);
        if (sign1 != sign2) {                  /* mark zero-crossing location */
            zerCross[i + 1] = 1;
            a++;
            if (a == 10) {                     /* stop after 10 crossings */
                break;
            }
        }
    }
    //cout << a << endl;
}

/* get sign of number */
bool getSign(float data)
{
    if (data > 0)   /* positive data */
        return (1);
    else            /* negative data */
        return (0);
}
One possibility is using sleep_until:
// requires <chrono> and <thread>
auto timestamp = std::chrono::high_resolution_clock::now();
for (;;)
{
    doTheStuff();                 // whatever work has to run each cycle
    timestamp += std::chrono::milliseconds(100);
    std::this_thread::sleep_until(timestamp);
}
While this could be achieved similarly with sleep_for with slightly less effort, the above approach is more precise because it accounts for the computation time of whatever you do.
For small tasks such precision might be irrelevant, as the sleeping precision itself is likely to be worse anyway...
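For comparison, a minimal runnable sketch of the sleep_for variant (doTheStuff here is just a placeholder for the real work): each cycle lasts 100 ms plus however long the work took, so the schedule drifts over time:

#include <chrono>
#include <iostream>
#include <thread>

// Placeholder for the real work (ADC capture, zero-crossing detection, SD write).
void doTheStuff() { std::cout << "tick" << std::endl; }

int main()
{
    for (;;)
    {
        doTheStuff();
        // Unlike sleep_until, this does not subtract the time doTheStuff()
        // took, so each iteration lasts 100 ms plus the work time.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}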