FreeRTOS and C-Trust Secure Boot - IAR

I am trying to incorporate FreeRTOS into an IAR application that uses IAR's C-Trust Secure Boot function. I am using an STM32 Nucleo L552ZE-Q board, which has a Cortex-M33 processor, but as far as I know the C-Trust secure boot does not use the TrustZone secure functions.
I have a basic FreeRTOS project for the L5 board which works correctly. When I try to port this over to an L5 project which has the C-Trust secure boot function added, it seems to get stuck in the vPortYield function in port.c.
The code for this function is:
    void vPortYield( void ) /* PRIVILEGED_FUNCTION */
    {
        /* Set a PendSV to request a context switch. */
        portNVIC_INT_CTRL_REG = portNVIC_PENDSVSET_BIT;

        /* Barriers are normally not required but do ensure the code is
         * completely within the specified behaviour for the architecture. */
        __asm volatile ( "dsb" ::: "memory" );
        __asm volatile ( "isb" );
    }
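For reference, in the stock FreeRTOS Cortex-M ports these two macros typically expand to the Interrupt Control and State Register (ICSR) and its PENDSVSET bit, roughly as below (quoted from memory of the port sources, so check your own port.c/portmacro.h for the exact definitions):

    /* Typical definitions used by the FreeRTOS Cortex-M33 port (for reference only): */
    #define portNVIC_INT_CTRL_REG    ( *( ( volatile uint32_t * ) 0xe000ed04 ) ) /* SCB->ICSR          */
    #define portNVIC_PENDSVSET_BIT   ( 1UL << 28UL )                             /* set-pending PendSV */

So the write above simply pends the PendSV exception; the actual context switch is performed in the PendSV handler installed by the port.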
The Disassembly window looks like this:
[![enter image description here][1]][1]
This is the same for the working and non-working projects.
The Secure Boot RAM memory map is:
[![enter image description here][2]][2]
The Secure Boot Flash memory map is:
[![enter image description here][3]][3]
Does anyone have a suggestion as to what is going wrong and how I might go about fixing it, please?
Many thanks
Stephen
[1]: https://i.stack.imgur.com/yACgy.png
[2]: https://i.stack.imgur.com/Re1Gc.png
[3]: https://i.stack.imgur.com/sxCHi.png

Related

How could just loading a DLL lead to 100% CPU load in my main application?

I have a perfectly working program which connects to a video camera (an IDS uEye camera) and continuously grabs frames from it and displays them.
However, when loading a specific DLL before connecting to the camera, the program runs with 100% CPU load. If I load the DLL after connecting to the camera, the program runs fine.
    // Includes and declarations added here for completeness; the original excerpt
    // omits them (the m_* variables are assumed to be globals or members elsewhere).
    #include <windows.h>              // HMODULE, LoadLibrary
    #include <thread>
    #include <uEye.h>                 // IDS uEye SDK
    #include <opencv2/highgui.hpp>    // cv::waitKey

    HIDS       m_hCam;
    SENSORINFO m_sInfo;
    INT        m_s32ImageWidth;
    INT        m_s32ImageHeight;
    INT        m_nColorMode;
    INT        m_nBitsPerPixel;
    char*      m_pcImageMemory;
    INT        m_lMemoryId;

    // Helper from the full project (definition omitted, signature assumed):
    void GetMaxImageSize(HIDS hCam, INT* pnSizeX, INT* pnSizeY);

    int main()
    {
        INT nRet = IS_NO_SUCCESS;

        // init camera (open next available camera)
        m_hCam = (HIDS)0;

        // (A) Uncomment this for 100% CPU load:
        // HMODULE handle = LoadLibrary(L"myInnocentDll.dll");

        // This is the call to the 3rd-party camera vendor's library:
        nRet = is_InitCamera(&m_hCam, 0);

        // (B) Uncomment this instead of (A) and the CPU load won't change:
        // HMODULE handle = LoadLibrary(L"myInnocentDll.dll");

        if (nRet == IS_SUCCESS)
        {
            /*
             * Please note: I have removed all lines which are not necessary for the exploit.
             * Therefore this is NOT a full example of how to properly initialize an IDS camera!
             */
            is_GetSensorInfo(m_hCam, &m_sInfo);
            GetMaxImageSize(m_hCam, &m_s32ImageWidth, &m_s32ImageHeight);

            m_nColorMode    = IS_CM_BGR8_PACKED;  // IS_CM_BGRA8_PACKED;
            m_nBitsPerPixel = 24;                 // 32;
            nRet |= is_SetColorMode(m_hCam, m_nColorMode);

            // allocate image memory
            if (is_AllocImageMem(m_hCam, m_s32ImageWidth, m_s32ImageHeight, m_nBitsPerPixel,
                                 &m_pcImageMemory, &m_lMemoryId) != IS_SUCCESS)
            {
                return 1;
            }
            else
            {
                is_SetImageMem(m_hCam, m_pcImageMemory, m_lMemoryId);
            }
        }
        else
        {
            return 1;
        }

        std::thread([&]() {
            while (true) {
                is_FreezeVideo(m_hCam, IS_WAIT);
                /*
                 * Usually, the image memory would now be grabbed via is_GetImageMem(),
                 * but as it is not needed for the exploit, I removed it as well.
                 */
            }
        }).detach();

        cv::waitKey(0);
    }
Independently of the camera driver actually used, in what way could loading a DLL change its performance, occupying 100% of all available CPU cores? When using the Visual Studio Diagnostic Tools, the excess CPU time is attributed to "[External Call] SwitchToThread" and not to myInnocentDll.
Loading just the DLL without the camera initialization does not result in 100% CPU load.
I was first thinking of some static initializers in myInnocentDll.dll configuring some threading behavior, but I did not find anything pointing in that direction. Which aspects should I look for in the code of myInnocentDll.dll?
After a lot of digging I found the answer and it is both frustratingly simple and frustrating by itself:
It is Microsoft's poor support of OpenMP. When I disabled OpenMP in my project, the camera driver ran just fine.
The reason seems to be that the Microsoft OpenMP runtime uses busy waiting. There is also the possibility to configure OMP_WAIT_POLICY manually, but as I was not depending on OpenMP anyway, disabling it was the easiest solution for me.
https://developercommunity.visualstudio.com/content/problem/589564/how-to-control-omp-wait-policy-for-openmp.html
https://support.microsoft.com/en-us/help/2689322/redistributable-package-fix-high-cpu-usage-when-you-run-a-visual-c-201
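For anyone who does depend on OpenMP, here is a minimal sketch of the environment-variable angle. It assumes the runtime reads OMP_WAIT_POLICY when it first initializes, which therefore has to happen after this call; whether Microsoft's runtime actually honors the variable is exactly what the first link above discusses:

    #include <cstdlib>

    int main()
    {
        // Request passive (sleeping) waits instead of busy-waiting at synchronization points.
        // Must run before any OpenMP code - including OpenMP code inside loaded DLLs - starts up.
        _putenv("OMP_WAIT_POLICY=PASSIVE");

        // ... camera initialization and the rest of the application ...
        return 0;
    }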
I still don't understand why the CPU load only went up when using the camera and not when running the rest of my solution, even though the camera library is pre-built and my disabling/enabling of OpenMP compilation cannot have any effect on it. And I also don't understand why they bothered to make a hotfix for VS2010 but have no real fix as of VS2019, which I am using. But the problem is averted.
You can disable the CPU idle state in the IDS camera manager, and then the minimum processor state in the Windows power plans is set to 100%.
I think this is worth mentioning here, even though you have solved your problem already.

OpenVINO GPU performance optimization

I'm trying to speed up inference in a people-counter application. In order to use the GPU, I've set the Inference Engine configuration as described below:
    device_name = "GPU";
    ie.SetConfig({ { PluginConfigParams::KEY_CONFIG_FILE, "./cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml" } }, device_name);
and when loading the network onto the Inference Engine I've set the target device as described below:
    CNNNetwork network = netReader.getNetwork();
    TargetDevice t_device = InferenceEngine::TargetDevice::eGPU;
    network.setTargetDevice(t_device);
    const std::map<std::string, std::string> dyn_config = { { PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES } };
    ie_.LoadNetwork(network, device_name, dyn_config);
but the Inference Engine still uses the CPU, and this slows down the inference time. Is there a way to use the Intel GPU at maximum power to do inference on a particular network? I'm using the person-detection-retail-0013 model.
Thanks.
Did you mean person-detection-retail-0013? I haven't found pedestrian-detection-retail-013 in the open_model_zoo repo.
It might be expected that you see a slowdown while using the GPU. The network you tested has the following layers as part of its topology: PriorBox and DetectionOutput. Those layers are executed on the CPU, as the documentation says: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_CL_DNN.html
My guess is that this may be the reason for the slowdown.
But to be 100% sure, I would suggest running the benchmark_app tool to benchmark the model. This tool can print detailed performance information about each layer, and it should help shed light on the real root cause of the slowdown. More information about benchmark_app can be found here: https://docs.openvinotoolkit.org/latest/_inference_engine_samples_benchmark_app_README.html
PS: Just a piece of advice regarding usage of the IE API: setTargetDevice is a deprecated method. It is enough to set the device using LoadNetwork, as in your example: ie_.LoadNetwork(network, device_name, dyn_config);
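A minimal sketch of that suggestion, reusing the names from the question (netReader, dyn_config) and assuming the InferenceEngine::Core API with a using-directive as in your snippets:

    // Device selection happens in LoadNetwork(); no setTargetDevice() call is needed.
    InferenceEngine::Core ie;
    CNNNetwork network = netReader.getNetwork();

    const std::map<std::string, std::string> dyn_config = {
        { PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES }
    };
    ExecutableNetwork exec_net = ie.LoadNetwork(network, "GPU", dyn_config);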
Hope it will help.

How to enable Long Range for BLE Mesh in Zephyr OS

I'm working on a Bluetooth mesh network solution and I have a requirement to increase the range.
I'm using the nRF52840 DK and nRF52840 dongles, with nRF5 SDK for Mesh v3.1.0.
In the Nordic DevZone I found a solution which enables the BLE long range mode in the nRF SDK for Mesh.
NOTE: I'm aware this solution doesn't comply with the Bluetooth Mesh standard.
The following changes were applied to nRF5 SDK for Mesh v3.1.0:
In advertise.c, set_default_broadcast_configuration() changed radio_mode to use RADIO_MODE_NRF_62K5BIT instead of RADIO_MODE_BLE_1MBIT.
In scanner.c, scanner_config_reset() changed scanner_config_radio_mode_set() to use RADIO_MODE_NRF_62K5BIT instead of RADIO_MODE_BLE_1MBIT.
In radio_config.c, the following code was added at the end of radio_config_config():
    if (p_config->radio_mode == RADIO_MODE_NRF_62K5BIT)
    {
        NRF_RADIO->PCNF0 |= (
            ((RADIO_PCNF0_PLEN_LongRange << RADIO_PCNF0_PLEN_Pos) & RADIO_PCNF0_PLEN_Msk) |
            ((2 << RADIO_PCNF0_CILEN_Pos) & RADIO_PCNF0_CILEN_Msk) |
            ((3 << RADIO_PCNF0_TERMLEN_Pos) & RADIO_PCNF0_TERMLEN_Msk) );
    }
In broadcast.c, the following was added to time_required_to_send_us():
    if (radio_mode == RADIO_MODE_NRF_62K5BIT)
    {
        packet_length_in_bytes += RADIO_PREAMBLE_LENGTH_LR_EXTRA_BYTES;
    }
RADIO_PREAMBLE_LENGTH_LR_EXTRA_BYTES = 9 was defined in the same file (see the sketch after this list).
The 5th element in radio_mode_to_us_per_byte[] was changed from 128 to 64.
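A sketch of that define as it might look in broadcast.c; the name and value come from the list above, while the exact placement and the comment are assumptions:

    /* Extra on-air bytes to account for when sending in the (mislabeled) long range mode. */
    #define RADIO_PREAMBLE_LENGTH_LR_EXTRA_BYTES (9)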
NOTE: the long-range mode is mislabeled. It is called RADIO_MODE_NRF_62K5BIT in the header file but actually corresponds to the 125 kbps BLE long range mode.
Unfortunately, for relays I am pushed to use Zephyr in order to support the Friend feature, and Zephyr is not relaying messages after the changes were applied to the nRF SDK. I did a brief investigation on the Zephyr side and found that the code for the BLE long range mode described above is in place and can be enabled using the following Kconfig settings:
    CONFIG_BT_AUTO_PHY_UPDATE=y
    CONFIG_BT_PHY_UPDATE=y
    CONFIG_BT_HCI_MESH_EXT=y
    CONFIG_BT_CTLR_PHY=y
    CONFIG_BT_CTLR_ADV_EXT=y
    CONFIG_BT_CTLR_ADVANCED_FEATURES=y
    CONFIG_BT_CTLR_PHY_2M=y
    CONFIG_BT_CTLR_PHY_CODED=y
But I still don't see messages being relayed on the Zephyr side (observed using the J-Link RTT Viewer). I also tried increasing the log level for Bluetooth and Mesh to DEBUG, but I don't see any signs that messages are malformed or rejected.
Maybe someone has ideas about which direction I should dig in on the Zephyr side?

ParaView: Live point cloud visualization plugin

I am writing a ParaView version 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open-source ParaView custom application called VeloView to visualize their LiDAR data. I tweaked some of their code to start, but I am stuck now.
So far I have written a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I have also written a ParaView source that listens on a port, captures UDP packets, and after they are captured uses the reader to split them into frames and visualize the point cloud.
Now I would like to take live UDP packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when the method RequestData is called. My method looks something like this:
    int RequestData(vtkInformation *request, vtkInformationVector **inputVector, vtkInformationVector *outputVector)
    {
        vtkPolyData* output = vtkPolyData::GetData(outputVector);
        vtkInformation* info = outputVector->GetInformationObject(0);

        int timestep = 0;
        if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
        {
            double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
            int length = info->Length(vtkStreamingDemandDrivenPipeline::TIME_STEPS());
            timestep = static_cast<int>(floor(timeRequest + 0.5));
        }

        this->Open();
        // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
        output->ShallowCopy(this->GetFrame(timestep));
        this->Close();

        return 1;
    }
The RequestData method is called every time the timestep is updated in the ParaView GUI. Then the frame from that timestep is copied into the outputVector.
I am not sure how to implement this with live data, because in that circumstance the RequestData method is not called, since no timesteps are requested. I saw there is a way to keep RequestData executing by using CONTINUE_EXECUTING(), in this way:
    request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know if that is supposed to be used to visualize live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the code of VeloView (which basically is a bundled ParaView + LidarPlugin), the ParaView timesteps are changed by the main code, not by the Lidar plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
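A minimal sketch of that idea, assuming your reader advertises one timestep per completed frame in RequestInformation so that newly captured frames become requestable (the class name vtkMyLidarSource and the NumberOfFrames member are placeholders, not part of your code):

    #include <numeric>
    #include <vector>
    #include <vtkInformation.h>
    #include <vtkInformationVector.h>
    #include <vtkStreamingDemandDrivenPipeline.h>

    int vtkMyLidarSource::RequestInformation(vtkInformation*, vtkInformationVector**,
                                             vtkInformationVector* outputVector)
    {
        vtkInformation* info = outputVector->GetInformationObject(0);

        // Advertise one timestep per frame received so far; NumberOfFrames is a
        // placeholder member that the packet-capture code would keep up to date.
        std::vector<double> steps(this->NumberOfFrames);
        std::iota(steps.begin(), steps.end(), 0.0);
        if (steps.empty())
        {
            return 1; // nothing captured yet
        }

        info->Set(vtkStreamingDemandDrivenPipeline::TIME_STEPS(),
                  steps.data(), static_cast<int>(steps.size()));
        double range[2] = { steps.front(), steps.back() };
        info->Set(vtkStreamingDemandDrivenPipeline::TIME_RANGE(), range, 2);
        return 1;
    }

Something on the client side still has to trigger an update and request the new timestep, which is what the LiveSource timer mentioned below automates.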
The newest version of VeloView (unreleased) uses the same mechanism as the ParaView "LiveSource" plugin (available in 5.6+), where the plugin tells ParaView to set a Qt timer that will automatically increment the available and requested timesteps.
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that will run RequestData multiple times, but it won't take care of updating the requested timestep.
Best,
Bastien Jacquet
VeloView project leader

I2C Does not acknowledge Slave Address

I am using an STM32F767ZI MCU on a Nucleo-144 board, with C++ as the programming language and the IAR Embedded Workbench IDE.
The TXIS flag is never getting set (1), even when the I2C peripheral is enabled and there is no data in the TXDR register.
I have also noticed that although both master and slave have the same slave address in the relevant registers, no address match (ADDCODE) occurs. As evident from the code, I am using the polling method. The ADDCODE field should hold the same address as the slave address, which is not happening either.
Hardware settings have been verified as correct.
I am trying to perform a loopback test on the same MCU using I2C1 as master transmitter and I2C2 as slave receiver. The code is getting stuck at the part below:
    while (!(IsTXISset()))  // Code is getting stuck here
    {
    }
Where IsTXISset() is as below:
    bool I2CInterface_c::IsTXISset(void) const
    {
        bool flag{false};
        volatile uint32_t isrreg = I2C_ISR.Get();
        isrreg &= TXISFLAG;  // TXISFLAG = 0x02, i.e. only the TXIS bit position is set high
        if (isrreg == TXISFLAG)
        {
            flag = true;
        }
        return flag;
    }
Can anybody assist with that, please?
Finally, I managed to resolve the issue after noticing that the GPIO pins were not being set up properly in alternate-function open-drain mode.
A second problem was identified after the slave started acknowledging the address match: no data transfer was taking place. This was resolved by writing a routine to clear the ADDR bit in the ISR register of the I2C peripheral.
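A register-level sketch of the two fixes described above, using CMSIS-style names. The pin choice (PB8/PB9 = I2C1 SCL/SDA on AF4) is an assumption for illustration only; check the actual pins used on the Nucleo-144 board:

    #include "stm32f7xx.h"  // CMSIS device header

    static void ConfigureI2C1Pins(void)
    {
        // 1) I2C pins must be in alternate-function, open-drain mode with pull-ups.
        RCC->AHB1ENR  |= RCC_AHB1ENR_GPIOBEN;                             // enable GPIOB clock
        GPIOB->MODER  &= ~((3U << (8 * 2)) | (3U << (9 * 2)));
        GPIOB->MODER  |=  ((2U << (8 * 2)) | (2U << (9 * 2)));            // alternate function
        GPIOB->OTYPER |=  (1U << 8) | (1U << 9);                          // open drain
        GPIOB->PUPDR  |=  ((1U << (8 * 2)) | (1U << (9 * 2)));            // pull-up
        GPIOB->AFR[1] |=  (4U << ((8 - 8) * 4)) | (4U << ((9 - 8) * 4));  // AF4 = I2C1
    }

    static void SlaveHandleAddressMatch(void)
    {
        // 2) On the slave (I2C2), clear the ADDR flag after the address match is detected,
        //    otherwise the transfer stalls. ADDR is cleared by writing ADDRCF in I2C_ICR.
        if (I2C2->ISR & I2C_ISR_ADDR)
        {
            I2C2->ICR = I2C_ICR_ADDRCF;
        }
    }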