I would like to use DPDK multi-process mode: process-1 will only update the LPM table, and process-2 will process pktmbufs with LPM lookups.
The details are subject to the following conditions:
Process-1 is part of an application that is essentially non-DPDK based; it links only librte_lpm and the bare minimum needed to make it work.
Process-2 is part of a full-fledged DPDK application with all the DPDK libraries linked in.
Also note that LPM updates from process-1 always happen infrequently.
Thanks,
Regards,
Venu
If both processes invoke rte_eal_init using the primary/secondary model, it will work. Process-1 (the primary) need not invoke any other API apart from rte_lpm add/delete, while process-2 can invoke rte_eth_rx_burst and perform rte_lpm_lookup.
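As a rough illustration of that split (a sketch, not code from this thread): assume both processes call rte_eal_init with matching EAL options, process-1 started with --proc-type=primary and process-2 with --proc-type=secondary. The table name "lpm_tbl", the table sizes, and the route values below are placeholders.

#include <stdint.h>
#include <rte_eal.h>
#include <rte_lpm.h>

/* Process-1 (primary): create the shared table and only add/delete routes. */
static struct rte_lpm *lpm_table_create(void)
{
    struct rte_lpm_config cfg;
    cfg.max_rules = 1024;        /* placeholder sizes */
    cfg.number_tbl8s = 256;
    cfg.flags = 0;
    struct rte_lpm *lpm = rte_lpm_create("lpm_tbl", SOCKET_ID_ANY, &cfg);
    if (lpm != NULL)
        rte_lpm_add(lpm, 0x0A000000 /* 10.0.0.0 */, 24, /* next_hop = */ 1);
    return lpm;
}

/* Process-2 (secondary): attach to the existing table and only look up. */
static uint32_t lpm_next_hop(uint32_t ipv4_dst)
{
    struct rte_lpm *lpm = rte_lpm_find_existing("lpm_tbl");
    uint32_t next_hop = 0;
    if (lpm != NULL && rte_lpm_lookup(lpm, ipv4_dst, &next_hop) == 0)
        return next_hop;
    return UINT32_MAX;           /* no route */
}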
PROBLEM: In a SharePoint + BPM solution, running Windows 2012 with IIS 8, after the application pool is recycled the first call of any process type fails; all subsequent calls succeed.
Starting from an ascx embedded in a SharePoint solution, one ascx per process type, we click a button that issues a Server.Transfer to a new page that creates a new instance of the intended process type; if this succeeds, from there we Response.Redirect, with query-string rewrite, to the new process instance just created. In the case of error (the first process after recycling) this last step does not occur.
As the page that instantiates the process is dynamic, depending on the type of process chosen, it cannot be pre-loaded before the solution starts. If we force the pre-instantiation of these pages at start-up, we could end up with a dummy process of each kind at every application pool recycle (once a day).
QUESTION: How can I locate [MyApp].XMLSerializers.dll and unload it, in order to validate my theory that its absence is responsible for the first process-call failures?
SOLUTIONS ATTEMPTED:
Optimization of the Application Pool and Site configuration
No Results
Search for DLL bind errors
Using FUSLOGVW, after recycling the application pool it seems that, on a first try, the [myApp].XMLSerializers.DLL, among others, is missing. As this is a time-consuming step, and the error does not happen when tracing (even only to the event viewer) is enabled, I suppose that the on-the-fly generation of the DLL with all serializable types could be related to this issue.
Findings:
Affects also processes that don't consume web services
When tracing is on there is no error
Affects all environments
Any advice greatly appreciated
Many thanks, LTS
My application launches hundreds of child processes sent to SGE. A few of them take a lot of memory, which causes those jobs to fail.
I need some way to monitor the memory usage of the child jobs from the main process and relaunch/resubmit them to the grid with a higher memory requirement when such failures occur.
I have heard something about a missing-heartbeat algorithm for such requirements, but I am not very familiar with it.
Can the experts here please help me find a good solution for this issue? My application is a C++ application on Linux/Solaris.
Thanks
Ruchi
A solution I have used before is a script that captures the output of the qstat command (using rsh in my case). I filter on my jobs and store the information I need (in my case it was CPU) in a continuously updated list. When a job aborted or was killed, it was easy to go back and look at the CPU usage. It is not 100% real-time, but good enough for me.
My language of choice was Python, as it contains easy-to-use libraries for capturing output and logging in to remote machines. However, it should be easy to implement something like capturing rsh output in C++; you can, for example, use popen() to pipe the output into your application. I hope this helps.
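If you go the C++ route, a minimal sketch of the popen() idea might look like this (the qstat command line and the "my_job_" name filter are placeholders, not from the original answer):

#include <cstdio>
#include <string>
#include <vector>

// Run qstat, capture its output, and keep only the lines that match our jobs.
std::vector<std::string> capture_qstat()
{
    std::vector<std::string> lines;
    FILE *pipe = popen("qstat -u $USER", "r");   // or "rsh host qstat ..."
    if (pipe == nullptr)
        return lines;

    char buf[512];
    while (fgets(buf, sizeof(buf), pipe) != nullptr) {
        std::string line(buf);
        if (line.find("my_job_") != std::string::npos)   // filter on your jobs
            lines.push_back(line);
    }
    pclose(pipe);
    return lines;
}

Calling this periodically and diffing the results against your own list of submitted jobs gives you the "continuously updated list" described above.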
I'm new to Windows API programming. I am aware that there are ways to check if a process is already running (via enumeration). However, I was wondering if there was a way to listen for when a process starts and ends (for example, notepad.exe) and then perform some action when the starting or ending of that process has been detected. I assume that one could run a continuous enumeration and check loop for every marginal unit of time, but I was wondering if there was a cleaner solution.
Use WMI, Win32_ProcessStartTrace and Win32_ProcessStopTrace classes. Sample C# code is here.
You'll need to write the equivalent C++ code. Which, erm, isn't quite that compact. It's mostly boilerplate, the survival guide is available here.
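Roughly, the C++ version boils down to a WQL notification query against Win32_ProcessStartTrace. A compressed sketch follows (error handling omitted, link against wbemuuid.lib; the query typically needs administrative rights, and the notepad.exe filter is just an example):

#define _WIN32_DCOM
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <cstdio>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator *loc = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void **>(&loc));
    IWbemServices *svc = nullptr;
    loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                       0, nullptr, nullptr, &svc);
    CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    // Semi-synchronous event query: Next() blocks until a matching process starts.
    IEnumWbemClassObject *events = nullptr;
    svc->ExecNotificationQuery(
        _bstr_t("WQL"),
        _bstr_t("SELECT * FROM Win32_ProcessStartTrace WHERE ProcessName = 'notepad.exe'"),
        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, nullptr, &events);

    for (;;) {
        IWbemClassObject *obj = nullptr;
        ULONG returned = 0;
        events->Next(WBEM_INFINITE, 1, &obj, &returned);
        if (returned == 0)
            break;
        VARIANT name, pid;
        obj->Get(L"ProcessName", 0, &name, nullptr, nullptr);
        obj->Get(L"ProcessID", 0, &pid, nullptr, nullptr);
        wprintf(L"started: %ls (pid %u)\n", name.bstrVal, pid.uintVal);
        VariantClear(&name);
        VariantClear(&pid);
        obj->Release();
    }
}

The same structure works for Win32_ProcessStopTrace; only the query string changes.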
If you can run code in kernel, check Detecting Windows NT/2K process execution.
Hans Passant has probably given you the best answer, but... It is slow and fairly heavy-weight to write in C or C++.
On versions of Windows up to and including Vista, you can get roughly 95% coverage with a WH_CBT Windows hook, which can be set with SetWindowsHookEx.
There are a few problems:
This misses some service starts/stops which you can mitigate by keeping a list of running procs and occasionally scanning the list for changes. You do not have to keep procs in this list that have explorer.exe as a parent/grandparent process. Christian Steiber's proc handle idea is good for managing the removal of procs from the table.
It misses things executed directly by the kernel. This can be mitigated the same way as #1.
There are misbehaved apps that do not follow the hook system rules which can cause your app to miss notifications. Again, this can be mitigated by keeping a process table.
The positives are it is pretty lightweight and easy to write.
For Windows 7 and up, look at SetWinEventHook. I have not written the code to cover Win7 so I have no comments.
Process handles are actually objects that you can "Wait" for, with something like "WaitForMultipleObjects".
While it doesn't deliver a notification as such, you can do this as part of your event loop by using the MsgWaitForMultipleObjects() variant of the call to combine it with your message processing.
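As a small illustration of that combination (the PID would come from your enumeration step; the loop structure here is an assumption, not the poster's code):

#include <windows.h>

// Wait for an already-running process to exit without polling,
// while still pumping window messages.
void wait_for_exit(DWORD pid)
{
    HANDLE h = OpenProcess(SYNCHRONIZE, FALSE, pid);
    if (h == nullptr)
        return;

    for (;;) {
        DWORD r = MsgWaitForMultipleObjects(1, &h, FALSE, INFINITE, QS_ALLINPUT);
        if (r == WAIT_OBJECT_0) {              // process handle signaled: it exited
            break;
        } else if (r == WAIT_OBJECT_0 + 1) {   // a message arrived: pump the queue
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }
    CloseHandle(h);
}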
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options
You can place a registry key here with your process name, then add a REG_SZ value named 'Debugger' containing your listener application name, to relay the process-start notification (a minimal sketch follows).
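Here is one way to write that value from C++ (the target name notepad.exe and the listener path are placeholders; note that once a Debugger value is set, Windows launches the listener instead of the target process, so the listener itself must start the target, e.g. as a debuggee or by re-spawning it):

#include <windows.h>

// Register a Debugger value under Image File Execution Options for notepad.exe.
// Writing to HKLM requires administrative rights.
bool register_debugger_hook()
{
    const wchar_t *key =
        L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
        L"Image File Execution Options\\notepad.exe";
    const wchar_t *listener = L"C:\\Tools\\my_listener.exe";   // placeholder path

    HKEY hKey;
    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, key, 0, nullptr, 0,
                        KEY_SET_VALUE, nullptr, &hKey, nullptr) != ERROR_SUCCESS)
        return false;

    LONG rc = RegSetValueExW(hKey, L"Debugger", 0, REG_SZ,
                             reinterpret_cast<const BYTE *>(listener),
                             (DWORD)((wcslen(listener) + 1) * sizeof(wchar_t)));
    RegCloseKey(hKey);
    return rc == ERROR_SUCCESS;
}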
Unfortunately there is no such zero-overhead approach to receiving process exit that I know of.
I created a logging module which logs messages to a MySQL DB; the current code is located here:
https://github.com/amiadogroup/mod_log_chat_mysql5/blob/master/src/mod_log_chat_mysql5.erl
The problem with the current code is that sometimes the connection gets closed and, as a result, the module stops working.
As you can see in the code, I store the DBRef in an ets table, which is not really a good way to go.
I asked the Erlang mailing list about this, and they suggested making the DB connection a child process of the module. This would enable the module to gracefully restart the connection when it is closed.
Now my question is: how can I implement this child process with gen_server and/or gen_mod?
Do I need to create two files or can I do it within the same file?
Is there any example somewhere on how I could achieve that?
Edit: As you can see in the linked github repo, I updated the code and it works now, weeha!
Looking at the mod_Archive code helped me a lot, although I didn't decide to upgrade my ejabberd version.
I have now run into another, related problem. In the code you can see that I do an initial query with "SET NAMES UTF8" to prevent garbling of messages. It seems that this isn't done again when the gen_server reconnects. Is there any hook I can use on reconnect so that the UTF8 query is executed every time?
Edit#2:
Now I switched to Emysql (https://github.com/Eonblast/Emysql) and it works out of the box by specifying the encoding directly on connect.
Code is on github.
Thanks for your help,
Michael
I suggest you look into general Erlang/OTP principles (gen_server, supervisor, etc).
ejabberd relies on this standard Erlang architecture pattern.
Regarding your comment on the database, ejabberd has its own way of managing databases and passing queries to MySQL, for example. You should look into that as well.
In your source code you are only applying the gen_mod behaviour; if you wish to have a gen_server you can do it in the same module, provided you define the gen_server behaviour as well.
A good example would be the ejabberd module mod_archive, which implements both behaviours.
Edit: I never really worked "directly" with MySQL in Erlang, but through the ejabberd methods I find it pretty "easy" (you will have to do a bit of setup, but it is rather easy). You have the method
ejabberd_odbc:sql_query_t(Query)
And as an example, you can find it in the module mod_archive_odbc.
To use that method (and the latter module) I downloaded the MySQL native driver and put the beams created from the driver in the ejabberd ebin dir (you can put them anywhere, as long as it is on the Erlang path).
A soft link to the ejabberd ebin is my favorite:
ln -s <diryouhavethedriver>/ebin/*.beam /usr/lib/ejabberd/ebin/
and do a few configurations in your ejabberd.cfg. This process is described on this page from ProcessOne. Notice that the full steps make MySQL the main database for ejabberd. You may not want that, so you can skip a few of the steps.
Hope this helps.
Our app is run as root (su) or as a normal user. We have a library linked into our project, and in that library there is a function we want to call. We have a folder called notRestricted in the directory we run the application from. We have created a new thread, and we want to limit that thread's access to the file system. What we want to do is simple: call that function but limit its write access to that folder only (we would prefer to let it read from anywhere the app can read from).
Update:
So I see that there is no way to restrict only one thread to a single folder while leaving the rest of the file system accessible...
I read your suggestions, dear SO users, and posted a sort of analogue of this question here; there they gave us a link to a sandbox with a decent API, but I do not really know whether it would work on anything but GentOS (anyway, such a script looks quite interesting if we use a Boost.Process command line to run it and then run the desired ex-thread, which has migrated to a separate application).
There isn't really any way you can restrict a single thread, because it's in the same process space as you, except for hacking methods like function hooking to detect any kind of file-system access.
Perhaps you might like to rethink how you're implementing your application: having native untrusted code run as su isn't exactly a good idea. Perhaps use another process and communicate via RPC, or use an interpreted language that you can check against at run time.
In my opinion, the best strategy would be:
Don't run this code in a different thread, but run it in a different process.
When you create this process (after the fork but before any call to execve), use chroot to change the root of the filesystem.
This will give you some good isolation... However doing so will make your code require root... Don't run the child process as root since root can trivially work around this.
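A rough sketch of that process-based approach, assuming the parent runs as root (the path and the unprivileged UID/GID are placeholders; note that chroot also restricts reads, which is stricter than the question asks for):

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

// Fork, jail the child into the notRestricted folder, drop root,
// then call the untrusted library function inside the child.
int run_sandboxed(void (*untrusted_fn)())
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                                   // child
        if (chroot("/path/to/notRestricted") != 0 || chdir("/") != 0)
            _exit(1);
        if (setgid(1000) != 0 || setuid(1000) != 0)   // placeholder IDs
            _exit(1);                                 // drop root so the child
                                                      // cannot escape the chroot
        untrusted_fn();                               // the library call
        _exit(0);
    }

    int status = 0;
    waitpid(pid, &status, 0);                         // parent waits for the child
    return status;
}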
Inject a replacement for open(2) that checks the arguments and returns -EACCES as appropriate.
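One way to do that on Linux is an LD_PRELOAD shim; here is a sketch under that assumption (the directory path is a placeholder, and in a libc wrapper "returning -EACCES" means returning -1 with errno set to EACCES). Build it with something like: g++ -shared -fPIC -o libopenguard.so openguard.cpp -ldl, then run the program with LD_PRELOAD=./libopenguard.so.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <stdarg.h>
#include <sys/types.h>

// Interpose open(2): allow reads anywhere, but deny writes/creates
// outside the notRestricted directory.
extern "C" int open(const char *path, int flags, ...)
{
    static auto real_open = reinterpret_cast<int (*)(const char *, int, ...)>(
        dlsym(RTLD_NEXT, "open"));

    mode_t mode = 0;
    if (flags & O_CREAT) {                 // the mode argument only exists with O_CREAT
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    bool wants_write = (flags & (O_WRONLY | O_RDWR | O_CREAT)) != 0;
    bool allowed_dir = strncmp(path, "/path/to/notRestricted/", 23) == 0;
    if (wants_write && !allowed_dir) {
        errno = EACCES;                    // the "-EACCES" case
        return -1;
    }
    return real_open(path, flags, mode);
}

A complete shim would also cover openat, creat, fopen, rename, unlink, and friends, so treat this only as an illustration of the hooking idea.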
This doesn't sound like the right thing to do. If you think about it, what you are trying to prevent is a problem well known to the computer games industry. The most common approach to dealing with it is simply encoding or encrypting the data you don't want others to have access to, in such a way that only you know how to read/understand it.