I'm currently building a robot which has some sensors attached to it. The control unit on the robot is an ARM Cortex-M3; all sensors are attached to it, and it is connected via Ethernet to the "ground station".
Now I want to read and write settings on the robot via the ground station. Therefore I thought about implementing a "virtual register" on the robot that can be manipulated by the ground station.
It could be made up of structs and look like this:
// accelerometer register
struct accel_reg {
    // accelerations
    int32_t accelX;
    int32_t accelY;
    int32_t accelZ;
};

// infrared distance sensor register
struct ir_reg {
    uint16_t dist; // distance
};

// robot's register table
struct {
    uint8_t status;          // current state
    uint32_t faultFlags;     // some fault flags
    accel_reg accelerometer; // accelerometer register
    ir_reg ir_sensors[4];    // 4 IR sensors connected
} robot;

// usage example:
robot.accelerometer.accelX = -981;
robot.ir_sensors[1].dist = 1024;
On the robot, the registers will be constantly filled with new sensor values, while configuration settings are set by the ground station and applied by the robot.
The ground station and the robot will be written in C++ so they both can use the same struct datatype.
The question I have now is: how do I encapsulate the read/write operations in a protocol without transmitting tons of metadata?
Let's say I want to read the register robot.ir_sensors[2].dist. How would I address this register in my protocol?
I already thought about transmitting a relative offset in bytes (i.e., the field's position in memory inside the struct), but I think memory alignment and padding may cause problems, especially because the ground station runs on an x86_64 architecture while the robot runs on a 32-bit ARM processor.
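To illustrate the concern, here is a sketch (assuming GCC/Clang; the packed attribute and the offset values are just illustrative):

#include <cstddef>
#include <cstdint>

struct ir_reg {
    uint16_t dist;
};

// Default layout: faultFlags may land at offset 1, 2 or 4 depending on
// the compiler's alignment rules, so raw byte offsets are not portable.
struct reg_table {
    uint8_t  status;
    uint32_t faultFlags;
    ir_reg   ir_sensors[4];
};

// One way to pin the layout down on both machines:
struct __attribute__((packed)) reg_table_packed {
    uint8_t  status;
    uint32_t faultFlags;   // now guaranteed to be at offset 1 everywhere
    ir_reg   ir_sensors[4];
};

static_assert(offsetof(reg_table_packed, faultFlags) == 1,
              "register layout must match on ground station and robot");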
Thanks for any hints! :)
I'm also going to suggest Google Protocol Buffers.
In the simplest case, you could implement one message RobotState like this:
message RobotState {
    optional int32 status = 1;
    optional int32 distance = 2;
    optional int32 accelX = 3;
    ...
}
Then when the robot receives the message, it will take new values from any optional field that is present. It will then reply with a message containing the current value of all fields.
This way it is quite easy to implement field update using the "merge message" functionality of most protobuf implementations. Also you can keep it very simple at start because you only have one message type, but if you need to expand later you can add submessages.
It is true that protobuf does not support int8_t or int16_t; just use int32 instead, which protobuf encodes compactly as a varint anyway.
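A minimal sketch of that update flow in C++ (assuming proto2 and the standard generated API; the header name is made up):

#include "robot_state.pb.h"  // generated from the .proto above (name assumed)

void apply_update(RobotState& current, const RobotState& update) {
    // MergeFrom() overwrites only the fields present in 'update', which is
    // exactly the partial-write behavior described above.
    current.MergeFrom(update);
}

RobotState make_reply(const RobotState& current) {
    // Reply with the full current state so the ground station can resync.
    return current;
}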
I think Google protocol buffers are an excellent session/presentation layer tool to use. Actually, Google protocol buffers do not support the syntax I am thinking of, so I will change this part of my answer to recommend XSD by Code Synthesis. Although it is primarily used with XML, it supports different presentation layers such as XDR and may be more efficient than protocol buffers with large amounts of optional data. The generated code is also very nice to work with. XSD is free to use with open-source software, and even for commercial use with a limited number of message structures.
I don't believe you want to read/write register sets at random. You can prefix a message with an enum that denotes the message type, such as IR update, distance, accel, etc. These are register groups. Then the robot responds with the register set. All the registers you've given so far are sensors. The ones to write must be motor control?
You want to think about what control you want to perform and the type of telemetry you would like to receive, then come up with a message structure and bundle the information together. You could use sequence diagrams, and remote procedure APIs like SOA/SOAP, RPC, REST, etc. I don't mean these RPC frameworks directly, but the concepts, such as request/response and messages that are simply sent periodically (telemetry) without specific requests. So there would be a telemetry request from the ground station with some sort of interval, and then the robot would respond periodically with unsolicited data. You always need a message id (the enum above), unless your protocol is going to be stateful, which I would discourage for robustness reasons.
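For illustration, a framing sketch along those lines (the ids and fields are made up):

#include <cstdint>

// Message ids identifying register groups (names are illustrative).
enum class MsgId : uint8_t {
    kIrUpdate     = 1,  // robot -> ground: IR register group
    kAccelUpdate  = 2,  // robot -> ground: accelerometer group
    kMotorCommand = 3,  // ground -> robot: motor control registers
    kTelemetryReq = 4,  // ground -> robot: subscribe at a given interval
};

// Every frame starts with the id, so a stateless parser can dispatch on it.
struct FrameHeader {
    uint8_t  id;           // a MsgId value
    uint16_t payload_len;  // length of the payload that follows
};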
You haven't described how the control system might work or if you wish to do this remotely. Describing that may lead to more ideas on the protocol. I believe we are talking about layers 5,6,7 of OSI. Have fun.
I have a question regarding buffering between blocks in GNU Radio. I know that each block in GNU Radio (including custom blocks) has buffers to store items that are going to be sent or received. In my project, there is a certain sequence I have to maintain to synchronize events between blocks. I am using GNU Radio on the Xilinx ZC706 FPGA platform with the FMCOMMS5.
In the GNU radio companion I created a custom block that controls a GPIO Output port on the board. In addition, I have an independent source block that is feeding information into the FMCOMMS GNU block. The sequence I am trying to maintain is that, in GNU radio, I first send data to the FMCOMMS block, second I want to make sure that the data got consumed by the FMCOMMS block (essentially by checking buffer), then finally I want to control the GPIO output.
From my observations, the source block buffer doesn’t seem to send the items until it’s full. This will cause a major issue in my project because this means that the GPIO data will be sent before or in parallel with sending the items to the other GNU blocks. That’s because I’m setting the GPIO value through direct access to its address in the ‘work’ function of my custom block.
I tried to use pc_output_buffers_full() in the ‘work’ function of my custom source in order to monitor the buffer, but I’m always getting 0.00. I’m not sure if it’s supposed to be used in custom blocks or if the ‘buffer’ in this case is something different from where the output items are stored. Here's a small code snippet which shows the problem:
char level_count = 0, level_val = 1;
vector<float> buff(1, 0.0000);

for (int i = 0; i < noutput_items; i++)
{
    // Toggle the output level every 20 items.
    if (level_count < 20)
    {
        out[i] = gr_complex((float)level_val, 0);
        level_count++;
    }
    else
    {
        level_count = 0;
        level_val ^= 1;
        out[i] = gr_complex((float)level_val, 0);
    }

    // Poll the performance counters; this always prints 0.00 here.
    buff = pc_output_buffers_full();
    for (size_t n = 0; n < buff.size(); n++)
        cout << fixed << setw(5) << setprecision(2) << setfill('0')
             << buff[n] << " ";
    cout << "\n";
}
Is there a way to monitor the buffer so that I can determine when my first part of data bits have been sent? Or is there a way to make sure that the each single output item is being sent like a continuous stream to the next block(s)?
GNU Radio Companion version: 3.7.8
OS: Linaro 14.04 image running on the FPGA
Or is there a way to make sure that the each single output item is being sent like a continuous stream to the next block(s)?
Nope, that's not how GNU Radio works (at all!):
A while back I wrote an article that explains how GNU Radio deals with buffers, and what these actually are. While the in-memory architecture of GNU Radio buffers might be of lesser interest to you, let me quickly summarize the dynamics of it:
The buffers that (general_)work functions are called with behave, for all practical purposes, like linearly addressable ring buffers. You get a random number of samples at once (restrictable to a minimum number of items, or multiples of a number), and anything you do not consume will be handed to you the next time work is called (see the sketch after this list).
These buffers hence keep track of how much you've consumed, and thus, how much free space is in a buffer.
The input buffer a block sees is actually the output buffer of the "upstream" block in the flow graph.
GNU Radio's computation is backpressure-controlled: Any block's work method will immediately be called in an endless loop given that:
There's enough input for the block to do work,
There's enough output buffer space to write to.
Therefore, as soon as one block finishes its work call, the upstream block is informed that there's new free output space, which typically leads to it running.
That leads to high parallelism, since even adjacent blocks can run simultaneously without conflicting.
This architecture favors large chunks of input items, especially for blocks that take a relatively long time to compute: while the block is still working, its input buffer is already being filled with chunks of samples; when it's finished, chances are it's immediately called again with all the available input buffer already filled with new samples.
This architecture is asynchronous: even if two blocks are "parallel" in your flow graph, there's no defined temporal relation between the numbers of items they produce.
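To make the consume semantics above concrete, here's a toy general_work (my_block_impl is a hypothetical gr::block subclass, not code from your flow graph):

#include <algorithm>

int my_block_impl::general_work(int noutput_items,
                                gr_vector_int &ninput_items,
                                gr_vector_const_void_star &input_items,
                                gr_vector_void_star &output_items)
{
    const gr_complex *in = (const gr_complex *) input_items[0];
    gr_complex *out = (gr_complex *) output_items[0];

    // Deliberately use only half of the available input; the scheduler keeps
    // the rest in the ring buffer and offers it again on the next call.
    int n = std::min(noutput_items, ninput_items[0] / 2);
    std::copy(in, in + n, out);

    consume_each(n);  // how much input we actually consumed
    return n;         // how much output we produced
}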
I'm not even convinced that switching GPIOs at times derived from the speed of computation in this completely non-deterministically timed data flow model is a good idea to start with. Maybe you'd rather want to calculate "timestamps" at which GPIOs should be switched, and send (timestamp, gpio state) command tuples to some entity in your FPGA that keeps absolute time? On the scale of radio propagation and high-rate signal processing, CPU timing is really inaccurate, and you should use the fact that you have an FPGA to actually implement deterministic timing, and use the software running on the CPU (i.e. GNU Radio) to determine when that should happen.
Is there a way to monitor the buffer so that I can determine when my first part of data bits have been sent?
Other than that, a method to asynchronously tell another block that, yes, you've processed N samples, would be either to have a single block that just observes the outputs of both blocks that you want to synchronize and consumes an identical number of samples from both inputs, or to implement something using message passing. Again, my suspicion is that this is not a solution to your actual problem.
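As a sketch of the first option (a hypothetical observer block; a sync_block consumes the same number of items from every input per work call, so both streams advance in lockstep):

#include <gnuradio/sync_block.h>
#include <gnuradio/io_signature.h>
#include <gnuradio/gr_complex.h>

class sync_probe : public gr::sync_block
{
public:
    sync_probe()
        : gr::sync_block("sync_probe",
                         gr::io_signature::make(2, 2, sizeof(gr_complex)),
                         gr::io_signature::make(0, 0, 0)) {}

    int work(int noutput_items,
             gr_vector_const_void_star &input_items,
             gr_vector_void_star &output_items)
    {
        // When this returns, both inputs have advanced by noutput_items,
        // so "N samples seen on both paths" is known at this point.
        return noutput_items;
    }
};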
Using National Instruments' DAQmx via C++, I would like to present a list of possible physical trigger inputs available on the system to the user.
I can set a task to start on an external trigger by calling something like
char* trigger_source = "/Dev1/PFI0";
DAQmxCfgDigEdgeStartTrig(taskAO, trigger_source, DAQmx_Val_Rising);
Is there a way to get a list of the valid values for trigger_source? I have found DAQmxGetSystemInfoAttribute(DAQmx_Sys_DevNames, , ) to get a list of the devices available in the system, and I know that DAQmxGetDevDILines() and similar functions can give me lists of some of the types of ports on a device. However, I have found nothing that returns the PFIs.
If a list cannot be obtained, is there a sane way to test whether a given guessing string like "/Dev%d/PFI%d" is a valid trigger source?
There are two ways:
Dynamically on-demand
Guess-check-cache-query
Dynamic
You can build this list, but not with a single call into the driver. Use a combination of these properties; a sketch follows the list:
DAQmxGetDevTerminals(const char device[], char *data, uInt32 bufferSize) which returns the PFI lines as well as internal terminals. It doesn't return any of the I/O terminals (like ai0).
DAQmxGetDevAIPhysicalChans(const char device[], char *data, uInt32 bufferSize) which returns the channel terminals for the AI subsystem; there are similar calls for the other DAQ subsystems.
DAQmxGetDevAnlgTrigSupported(const char device[], bool32 *data) which returns whether the device supports triggering from analog signals.
DAQmxGetDevDigTrigSupported(const char device[], bool32 *data) which returns whether the device supports triggering from digital signals.
DAQmxGetDevAITrigUsage(const char device[], int32 *data) which returns what trigger types the AI subsystem can use; there are similar calls for the other DAQ subsystems.
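A sketch of the first call (error handling trimmed; "Dev1" is assumed, real device names come from DAQmx_Sys_DevNames):

#include <NIDAQmx.h>
#include <cstdio>

int main()
{
    // Returns a comma-separated list such as "/Dev1/PFI0, /Dev1/PFI1, ..."
    char terminals[4096] = {0};
    if (DAQmxGetDevTerminals("Dev1", terminals, sizeof(terminals)) < 0) {
        char msg[2048];
        DAQmxGetExtendedErrorInfo(msg, sizeof(msg));
        std::fprintf(stderr, "DAQmx error: %s\n", msg);
        return 1;
    }
    std::printf("%s\n", terminals);  // split on commas to build the UI list
    return 0;
}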
Cached
You could also create a dummy task and preview each terminal and trigger type combination.
You won't need to run the task, just "verify" it, which will prompt the driver to run its rules system on those settings and return an error if that configuration is not supported. If you cache these in memory or a file (or a DB or whatever), it might be easier to query that instead of the driver.
DAQmxTaskControl (TaskHandle taskHandle, int32 action) which moves the task in the DAQmx state model. Using DAQmx_Val_Task_Verify for the action parameter will verify that all task parameters are valid for the hardware.
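A sketch of that verify flow (the AO channel name is an assumption; sample clock timing is configured because some devices reject start triggers on on-demand tasks):

#include <NIDAQmx.h>

// Returns true if 'terminal' is accepted as a digital-edge start trigger.
bool is_valid_trigger_source(const char* terminal)
{
    TaskHandle task = 0;
    bool ok = DAQmxCreateTask("", &task) >= 0
        && DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "", -10.0, 10.0,
                                    DAQmx_Val_Volts, NULL) >= 0
        && DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                                 DAQmx_Val_FiniteSamps, 100) >= 0
        && DAQmxCfgDigEdgeStartTrig(task, terminal, DAQmx_Val_Rising) >= 0
        && DAQmxTaskControl(task, DAQmx_Val_Task_Verify) >= 0;
    if (task)
        DAQmxClearTask(task);
    return ok;  // cache this per (terminal, trigger type) combination
}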
I am working on a project in which I am controlling a quadcopter with an Android phone. I have noticed that using the DroneListener to listen for ATTITUDE_UPDATED messages does not update the yaw nearly as fast as I need it to. I would like to get the quadcopter's yaw updated frequently, in (more or less) real time, such as how the Mission Planner application does when you plug your quadcopter in via USB.
I have tried using msg_request_data_stream (which is essentially what Mission Planner uses, but in C#) to request various data streams from the drone, but it has not worked. My assumption is that the response to this request (i.e., the stream) would be received by any registered MavlinkObserver; however, the custom MavlinkObserver that I have added to my drone has not had its onMavlinkMessageReceived(…) method called a single time. It has not even received heartbeat messages, which should (based on my understanding) be arriving regularly and frequently.
Is there any help that you could offer? Is there any better way to get the quadcopter’s yaw than waiting for the DroneListener to be notified of events? I know that it should be possible with Mavlink (MissionPlanner does it), but I haven’t been having any success with Mavlink messages or with a MavlinkObserver.
Here is my code for requesting a data stream:
msg_request_data_stream message = new msg_request_data_stream();
//not sure how doubles cast to byte, so I cast it to an int first (the value is
//always going to be an int anyway, getValue just returns doubles for parameters by default)
message.target_system = (byte)((int)params.getParameter("SYSID_THISMAV").getValue());
message.req_message_rate = (short)1;
message.req_stream_id = (byte)0;
message.start_stop = (byte)1;
//wrap the message and send it!
ExperimentalApi.sendMavlinkMessage(this, new MavlinkMessageWrapper(message));
I need help making a GNU Radio OOT module. I am actually trying to extend existing code.
I am trying to make a 2 TX 1 RX tagged stream block (OOT). For 1 TX 1 RX it is working fine; I am trying to extend it. The problem is that I am not able to configure the send() function. With this code, one transmitter transmits, but the other does not work. The subdev specification, frequency and other parameters are allocated correctly; I checked.
If I run make test, it does not show any problem. I checked every port of my USRP X310; it is working fine.
Here's the code. I'm putting in the short part which deals with the send and receive buffers.
void
usrp_echotimer_cc_impl::send()
{
    // Data to USRP
    num_tx_samps = d_tx_stream->send(d_in_send1, total_num_samps,
        d_metadata_tx, total_num_samps/(float)d_samp_rate+d_timeout_tx);
    num_tx_samps = d_tx_stream->send(d_in_send0, total_num_samps,
        d_metadata_tx, total_num_samps/(float)d_samp_rate+d_timeout_tx);
}

int
usrp_echotimer_cc_impl::work(int noutput_items,
                             gr_vector_int &ninput_items,
                             gr_vector_const_void_star &input_items,
                             gr_vector_void_star &output_items)
{
    gr_complex *in0 = (gr_complex *) input_items[0];
    gr_complex *in1 = (gr_complex *) input_items[1];
    gr_complex *out = (gr_complex *) output_items[0];

    // Set output items on packet length
    noutput_items = ninput_items[0] = ninput_items[1];

    // Resize output buffer
    if (d_out_buffer.size() != noutput_items)
        d_out_buffer.resize(noutput_items);

    // Send thread
    d_in_send0 = in0;
    d_in_send1 = in1;
    d_noutput_items_send = noutput_items;
    d_thread_send =
        gr::thread::thread(boost::bind(&usrp_echotimer_cc_impl::send, this));

    // Receive thread
    d_out_recv = &d_out_buffer[0];
    d_noutput_items_recv = noutput_items;
    d_thread_recv =
        gr::thread::thread(boost::bind(&usrp_echotimer_cc_impl::receive, this));
My system config is an X310 with an SBX-120 daughterboard, and I am using UHD 3.9. I checked the subdev specification, gain and frequency assignment; those are fine.
For completeness:
This was asked yesterday on the GNU Radio mailing list; Sanjoy has already gotten two responses:
Martin Braun wrote:
Sorry, Sanjoy,
we'll need some more information before we can give you better
feedback. It's not clear exactly what you're trying to achieve, and
what exactly is failing.
Perhaps this helps getting started:
http://gnuradio.org/redmine/projects/gnuradio/wiki/ReportingErrors
Cheers, Martin
And my answer is a bit longish, but it's available here. In excerpts:
Hi Sanjoy,
I am trying to make 2 TX 1Rx tagged stream block(OOT). For 1Tx 1Rx, it was working fine. I am trying to extend it. Now the problem is, I
am not being able to configure the send() function.
Is there a
particular reason you're creating your own block? Is there a feature
missing on the USRP sink and source that come with gr-uhd?
...
your send() function looks a bit strange; what you're doing is
transmitting two things on a single channel after each other. Also,
what you should be doing (from a programming style point of view) is
pass the buffers you want to send as references or something, but not
save them to class properties and then call send(). To send two
buffers to two different channels, you will need to use a vector
containing the two buffers -- have a look at rx_multi_samples; that of
course recv()s rather than sends(), but the semantics are the same.
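For illustration, a sketch of a single two-channel send (assuming d_tx_stream was created with a two-channel stream_args; untested):

// One buffer per channel, one send() call for both channels.
std::vector<const void*> buffs(2);
buffs[0] = in0;  // channel 0 samples
buffs[1] = in1;  // channel 1 samples

size_t num_tx_samps = d_tx_stream->send(
    buffs, total_num_samps, d_metadata_tx,
    total_num_samps / (double)d_samp_rate + d_timeout_tx);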
...
noutput_items is given to you to let your work function know how much it may produce; that's why it's a parameter.
...
There's no reason GNU Radio couldn't already call your work function again while usrp_echotimer_cc_impl::send() hasn't even started to transmit samples. Then, your d_send variables will be overwritten.
...
Since a single X310 is inherently coherent and has the same time, you can simply use a USRP sink and a source, use set_start_time(...) on both with the same time spec, and have your flow graph consume and produce samples in different threads, coherently.
I'm trying to create a standalone AGC using the WebRTC library (input: wav file; output: wav file with adjusted gain), but at this point I'm stuck on the following issue.
I'm trying to use the functions declared in the gain_control.h file. When I use WebRtcAgc_Process(....), I obtain a constant gain that is applied to the whole signal, rather than a nonlinear gain that depends on the input signal magnitude.
Maybe I should use different functions for my purpose? How can I implement AGC via the WebRTC library?
The AGC's main purpose is to provide a recommended system mic volume which the user is expected to set through the OS. If you would like to apply a purely digital gain, you can configure it in one of two modes (from modules/audio_processing/include/audio_processing.h, but gain_control.h has analogous modes):
// Adaptive mode intended for situations in which an analog volume control
// is unavailable. It operates in a similar fashion to the adaptive analog
// mode, but with scaling instead applied in the digital domain. As with
// the analog mode, it additionally uses a digital compression stage.
kAdaptiveDigital,
// Fixed mode which enables only the digital compression stage also used by
// the two adaptive modes.
//
// It is distinguished from the adaptive modes by considering only a
// short time-window of the input signal. It applies a fixed gain through
// most of the input level range, and compresses (gradually reduces gain
// with increasing level) the input signal at higher levels. This mode is
// preferred on embedded devices where the capture signal level is
// predictable, so that a known gain can be applied.
kFixedDigital
You can set these through WebRtcAgc_Init(), though unless you need to avoid the overhead, I'd recommend just using the AudioProcessing class.
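A sketch of the AudioProcessing route (API of that era; the interface has changed across WebRTC releases, so treat these names as assumptions and check your checkout):

#include "modules/audio_processing/include/audio_processing.h"

// Purely digital AGC, no OS mic volume involved.
webrtc::AudioProcessing* apm = webrtc::AudioProcessing::Create();
apm->gain_control()->set_mode(webrtc::GainControl::kFixedDigital);
apm->gain_control()->Enable(true);
// Then push the wav file through apm->ProcessStream(...) in 10 ms frames
// and write the processed frames back out.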
Refer to http://osxr.org/android/source/external/webrtc/src/modules/audio_processing/agc/interface/gain_control.h#0133:
"The gain adjustments are done only during active periods of speech. The input speech length can be either 10ms or 20ms and the output is of the same length."
A quick overview of WebRtcAgc_Process:
int WebRtcAgc_Process(void* agcInst,
                      const WebRtc_Word16* inNear,
                      const WebRtc_Word16* inNear_H,
                      WebRtc_Word16 samples,
                      WebRtc_Word16* out,
                      WebRtc_Word16* out_H,
                      WebRtc_Word32 inMicLevel,
                      WebRtc_Word32* outMicLevel,
                      WebRtc_Word16 echo,
                      WebRtc_UWord8* saturationWarning);
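For context, a hypothetical call sequence for 10 ms mono frames at 16 kHz (the Create/Init/Free signatures and the kAgcModeFixedDigital constant are assumptions from the legacy API; verify against your gain_control.h):

void* agc = NULL;
WebRtcAgc_Create(&agc);                                   // assumed allocator
WebRtcAgc_Init(agc, 0, 255, kAgcModeFixedDigital, 16000); // assumed signature

WebRtc_Word16 in[160], out[160];   // 10 ms at 16 kHz
WebRtc_Word32 micLevelOut = 0;
WebRtc_UWord8 saturationWarning = 0;

// For every 10 ms frame read from the input wav file:
WebRtcAgc_Process(agc, in, NULL, 160, out, NULL,
                  0,             // inMicLevel: unused for fixed digital gain
                  &micLevelOut,
                  0,             // echo flag
                  &saturationWarning);

WebRtcAgc_Free(agc);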