ARM processor, Debian, error: expected primary-expression before '.' token - c++

I'm trying to perform SPI communication on an ARM processor running Debian Linux.
I'm unable to compile the code below (error: expected primary-expression before '.' token) using:
g++ comm.cpp -o comm
I am not sure how I can compile this code. Is there something I am missing? Thank you.
struct spi_ioc_transfer tr = {
.tx_buf = (unsigned long)tx,
.rx_buf = (unsigned long)rx,
.len = ARRAY_SIZE(tx),
.delay_usecs = delay,
.speed_hz = speed,
.bits_per_word = bits
};
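For reference, one C++-friendly workaround (a minimal sketch, assuming the same tx, rx, delay, speed, and bits variables and the ARRAY_SIZE macro from the snippet above) is to zero the struct and assign each member individually, since older g++ rejects C99-style designated initializers when compiling C++:

#include <cstring>              // memset
#include <linux/spi/spidev.h>   // struct spi_ioc_transfer

struct spi_ioc_transfer tr;
memset(&tr, 0, sizeof(tr));     // zero every field, including the ones not set below
tr.tx_buf        = (unsigned long)tx;
tr.rx_buf        = (unsigned long)rx;
tr.len           = ARRAY_SIZE(tx);
tr.delay_usecs   = delay;
tr.speed_hz      = speed;
tr.bits_per_word = bits;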


LDAP connection with AD using C++

I'm trying to make an LDAP connection with Active Directory to get the list of users, but I'm not even able to compile simple code that just authenticates with AD using C++.
I have tried many C++ example programs but only got compilation errors. I really just want to connect to AD from C++ without any errors, so can you please tell me what I'm doing wrong in this code, which attempts to add a new user to AD? I have added the environment details, code, and errors below for reference.
CODE:
#ifndef UNICODE
#define UNICODE
#endif
#pragma comment(lib, "netapi32.lib")
#include <windows.h>
#include <lm.h>
#include<iostream>
int main()
{
USER_INFO_1 ui;
DWORD dwLevel = 1;
DWORD dwError = 0;
NET_API_STATUS nStatus;
//
// Set up the USER_INFO_1 structure.
// USER_PRIV_USER: name identifies a user,
// rather than an administrator or a guest.
// UF_SCRIPT: required
//
ui.usri1_name = L"username";
ui.usri1_password = L"password";
ui.usri1_priv = USER_PRIV_USER;
ui.usri1_home_dir = NULL;
ui.usri1_comment = NULL;
ui.usri1_flags = UF_SCRIPT;
ui.usri1_script_path = NULL;
//
// Call the NetUserAdd function, specifying level 1.
//
nStatus = NetUserAdd(L"servername",
dwLevel,
(LPBYTE)&ui,
&dwError);
//
// If the call succeeds, inform the user.
//
if (nStatus == NERR_Success)
fwprintf(stderr, L"User %s has been successfully added on %s\n",
L"user", L"dc");
//
// Otherwise, print the system error.
//
else
fprintf(stderr, "A system error has occurred: %d\n", nStatus);
return 0;
}
ERROR:
PS C:\Users\user\Desktop\Sandbox\Cpp> cd "c:\Users\user\Desktop\Sandbox\Cpp\" ; if ($?) { g++ ldap.cpp -o ldap } ; if ($?) { .\ldap }
ldap.cpp: In function 'int main()':
ldap.cpp:22:20: warning: ISO C++ forbids converting a string constant to 'LPWSTR' {aka 'wchar_t*'} [-Wwrite-strings]
22 | ui.usri1_name = L"username";
| ^~~~~~~~~~~
ldap.cpp:23:24: warning: ISO C++ forbids converting a string constant to 'LPWSTR' {aka 'wchar_t*'} [-Wwrite-strings]
23 | ui.usri1_password = L"password";
| ^~~~~~~~~~~
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\user-1~1\AppData\Local\Temp\ccByZfCT.o:ldap.cpp:(.text+0xfb): undefined reference to `NetUserAdd'
collect2.exe: error: ld returned 1 exit status
My system runs on Windows 10 64-bit, with MSYS and the MinGW64 compiler installed.
I'm no C++ or MinGW expert, but I have a little experience, and I did some Googling. This is the only error:
undefined reference to `NetUserAdd'
The others are warnings.
By your output, it looks like your command to compile is this:
g++ ldap.cpp -o ldap
Try adding -lnetapi32 to the end of that:
g++ ldap.cpp -o ldap -lnetapi32
If you want to resolve those warnings, I think you can declare variables for the username and password rather than assigning literals directly to the struct:
wchar_t username[] = L"username";
wchar_t password[] = L"password";
ui.usri1_name = username;
ui.usri1_password = password;

Visual Studio Linux cannot find host global?

I'm attempting to build the netcode server.c example using Visual Studio for Linux. During compilation I am receiving the following error:
netcode.c(5194,24): error : ‘CLOCK_MONOTONIC_RAW’ undeclared (first use in this function)
Which is coming from the following source:
// linux
#include <unistd.h>
void netcode_sleep( double time )
{
struct timespec ts;
ts.tv_sec = (time_t) time;
ts.tv_nsec = (long) ((time - (double) ( ts.tv_sec )) * 1000000000.0);
nanosleep( &ts, NULL );
}
double netcode_time()
{
static double start = -1;
if ( start == -1 )
{
struct timespec ts;
clock_gettime( CLOCK_MONOTONIC_RAW, &ts );
start = ts.tv_sec + ( (double) ( ts.tv_nsec ) ) / 1000000000.0;
return 0.0;
}
struct timespec ts;
clock_gettime( CLOCK_MONOTONIC_RAW, &ts );
double current = ts.tv_sec + ( (double) ( ts.tv_nsec ) ) / 1000000000.0;
return current - start;
}
I have successfully built this project on Windows, as well as directly on a Linux machine, with no issue. I'm a little perplexed, as I wasn't expecting to run into any problems: from what I can tell, Visual Studio for Linux really just syncs the sources to a Linux host and sends a g++ command so everything is compiled on the host... so I'm not really sure why this is happening. Any ideas?
EDIT
The solution was to include #define _POSIX_C_SOURCE 199309L in the library's source file. For some reason this is not required when building directly on the Linux machine, but Visual Studio for Linux requires it. I'm guessing the compile command generated by Visual Studio for Linux is somehow preventing the build from picking up _POSIX_C_SOURCE? Maybe? (I'm making this assumption based on the fact that the project built successfully on the Linux machine using a basic g++ command: g++ server.c netcode.a -lsodium.)
Can anyone identify a switch/option in the command below (generated by Visual Studio) that would be causing that?
"g++" -W"switch"
-W"no-deprecated-declarations"
-W"empty-body"
-W"conversion"
-W"return-type"
-W"parentheses"
-W"no-pointer-sign"
-W"no-format"
-W"uninitialized"
-W"unreachable-code"
-W"unused-function"
-W"unused-value"
-W"unused-variable"
-std=c++11
-Wall -fno-strict-aliasing
-g2 -gdwarf-2 "g++" -O0 "3600000" -fthreadsafe-statics
-W"switch"
-W"no-deprecated-declarations"
-W"empty-body"
-W"conversion"
-W"return-type"
-W"parentheses"
-W"no-format"
-W"uninitialized"
-W"unreachable-code"
-W"unused-function"
-W"unused-value"
-W"unused-variable"
-frtti -fno-omit-frame-pointer -std=c11 -fexceptions -o "C:\dev\project<different options>
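For reference, a minimal sketch of the fix described in the edit above: the feature-test macro has to be defined before the first system header is included (the file layout here is just illustrative):

// At the very top of netcode.c, before any #include, so that <time.h>
// exposes clock_gettime() and the CLOCK_* constants under a strict -std= mode.
#define _POSIX_C_SOURCE 199309L

#include <time.h>
#include <unistd.h>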

parse error before ')' token in wrapper function

I'm trying to compile the following wrapper function for a system call for my Operating Systems class and I keep getting the following compilation error.
Just to clarify, this code is from a HW assignment where we needed to add some functionality to the task_struct. It's Linux 2.4 running on a VM.
syscall_files.h: In function `get_all_events_number`:
syscall_files.h:58: parse error before ')' token
int get_all_events_number(){
long __res;
__asm__ volatile (
"movl $245, %%eax;"
"int $0x80;"
"movl %%eax, %0"
: "=m" (__res)
: "%eax"
); << line 58
if((unsigned long)(__res) >= (unsigned long)(-125)) {
errno = -(__res);
__res = -1;
}
return (int)(__res);
}
Can anyone see the problem? I've been trying to figure it out for the past 30 minutes and I have no idea what's wrong.
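In case it helps narrow it down: in GCC extended asm the clobber list is the third colon-separated section, so with only two sections the "%eax" string is parsed as an input constraint, and the compiler then expects an operand expression before the ')'. A sketch of the same function with an empty input section added (everything else, including syscall number 245, is kept as in the question):

#include <errno.h>

int get_all_events_number() {
    long __res;
    __asm__ volatile (
        "movl $245, %%eax;"
        "int $0x80;"
        "movl %%eax, %0"
        : "=m" (__res)   /* outputs */
        :                /* no inputs; the empty section is still needed... */
        : "%eax"         /* ...so this is read as a clobber, not an input */
    );
    if ((unsigned long)(__res) >= (unsigned long)(-125)) {
        errno = -(__res);
        __res = -1;
    }
    return (int)(__res);
}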

Weird and unpredictable crash when using libx264 cross-compiled with MinGW

I'm working on a C++ project using Visual Studio 2010 on Windows. I'm linking dynamically against x264 which I built myself as a shared library using MinGW following the guide at
http://www.ayobamiadewole.com/Blog/Others/x264compilation.aspx
The strange thing is that my x264 code works perfectly sometimes. Then, when I change some line of code (or even just change the comments in the file!) and recompile, everything crashes on the line
encoder_ = x264_encoder_open(&param);
With the message
Access violation reading location 0x00000000
I'm not doing anything funky at all, so it's probably not my code that is wrong; I guess something is going wrong with the linking, or maybe with how I compiled x264.
The full initialization code:
x264_param_t param = { 0 };
if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0) {
throw KStreamerException("x264_param_default_preset failed");
}
param.i_threads = 1;
param.i_width = 640;
param.i_height = 480;
param.i_fps_num = 10;
param.i_fps_den = 1;
encoder_ = x264_encoder_open(&param); // <-----
if (encoder_ == 0) {
throw KStreamerException("x264_encoder_open failed");
}
x264_picture_alloc(&pic_, X264_CSP_I420, 640, 480);
Edit: It turns out that it always works in Release mode, and when using superfast instead of ultrafast it also works in Debug mode 100% of the time. Could it be that ultrafast mode is doing some crazy optimizations that the debugger doesn't like?
I've run into this problem too, with libx264-120.
libx264-120 was built on MinGW with the configuration options below.
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
platform: X86
system: WINDOWS
cli: no
libx264: internal
shared: yes
static: no
asm: yes
interlaced: yes
avs: yes
lavf: no
ffms: no
gpac: no
gpl: yes
thread: win32
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
visualize: no
bit depth: 8
chroma format: all
$ make -j8
lib /def:libx264-120.def /machine:x86
#include "stdafx.h"
#include <iostream>
#include <cassert>
using namespace std;
#include <stdint.h>
extern "C"{
#include <x264.h>
}
int _tmain(int argc, _TCHAR* argv[])
{
int width(640);
int height(480);
int err(-1);
x264_param_t x264_param = {0};
//x264_param_default(&x264_param);
err =
x264_param_default_preset(&x264_param, "veryfast", "zerolatency");
assert(0==err);
x264_param.i_threads = 8;
x264_param.i_width = width;
x264_param.i_height = height;
x264_param.i_fps_num = 60;//fps;
x264_param.i_fps_den = 1;
// Intra refres:
x264_param.i_keyint_max = 60;//fps;
x264_param.b_intra_refresh = 1;
//Rate control:
x264_param.rc.i_rc_method = X264_RC_CRF;
x264_param.rc.f_rf_constant = 25;
x264_param.rc.f_rf_constant_max = 35;
//For streaming:
x264_param.b_repeat_headers = 1;
x264_param.b_annexb = 1;
err = x264_param_apply_profile(&x264_param, "baseline");
assert(0==err);
x264_t *x264_encoder = x264_encoder_open(&x264_param);
x264_encoder = x264_encoder;
x264_encoder_close( x264_encoder );
getchar();
return 0;
}
This program succeeds sometimes, but often fails in x264_encoder_open with the access violation.
There is little information about this on Google, and how to initialize x264_param_t and how to use x264_encoder_open are unclear.
The behavior seems to be caused by x264's setting values, but I can't tell which without reading some open-source programs that use libx264.
Also, this access violation does not seem to occur on the first execution, nor when compiling with MinGW's gcc (e.g. gcc -o test test.c -lx264; ./test).
Given this behavior, I think libx264 is doing something strange with resources in the DLL version of libx264 built with MinGW's gcc.
I had the same problem. The only way I was able to fix it was to build the x264 DLL without the asm option (i.e. specify --disable-asm).
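For what it's worth, a sketch of that rebuild, keeping the shared-library options from the configure line earlier in this thread and only adding --disable-asm (any other flags are specific to your own setup):

$ ./configure --disable-cli --enable-shared --disable-asm
$ make -j8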

problem with va_arg()

I want to write a function with variable arguments in this way:
static void configElement(U32 localFaultId,
char* name,
U32 report,
U32 localId,
U32 detectTime,
U32 ceaseTime,...)
{
U32 i = 0;
U32 tmpNo = 0;
va_list ap;
if (nofFaults >= MAX_NOF_LOCAL_FAULTS)
{
//something here
return;
}
else
{
faultList[nofFaults].ceaseTime = ceaseTime;
va_start(ap, ceaseTime);
tmpNo = va_arg(ap, U32);
while ((tmpNo!= END_MARK) && (i < MAX_NOF_DEPEND))
{
faultList[nofFaults].dependList[i++].faultNo = tmpNo;
}
faultList[nofFaults].dependList[i].faultNo = END_MARK;
/* Finish by increment nofFaults parameter */
va_end(ap);
nofFaults++;
}
}
However, I get the following error messages when compiling this code:
fault_manager.cc:3344: error: expected primary-expression before ',' token
fault_manager.cc:3387: error: expected primary-expression before 'U32'
fault_manager.cc:3387: error: expected `)' before 'U32'
fault_manager.cc:3387: error: expected `)' before ';' token
fault_manager.cc:3387: error: expected `)' before ';' token
I have no idea what is going wrong here. My platform is Windows, and I'm using Cygwin + Eclipse (CDT). The version of gcc is 4.1.1.
Any ideas would be much appreciated!
It looks like the compiler does not know what U32 is. Did you include all necessary headers?
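As a minimal sketch of what that would mean at the top of the file (the U32 typedef here is purely hypothetical; the real definition presumably lives in one of the project's own headers):

#include <stdarg.h>    /* va_list, va_start, va_arg, va_end */
#include <stdint.h>

typedef uint32_t U32;  /* hypothetical stand-in; use the project's actual definition */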