SAS EG: mute/comment/put on hold/hide a branch

In SAS EG, is there a way to put the execution of a branch on hold (the task would be grayed out, for example) so that it is not executed when I run a parent process?
If not, would you advise a good practice for setting some tasks aside without losing the process-tree structure?

It depends on the version of your SAS EG. Check whether an "add condition" option appears when you click on any of the tasks. Once a condition has been added, it shows as a flag on the task.

You can add a condition as shown in the other answer and have it depend on a prompt value (if you might sometimes want to run the task) or on a macro variable that you define by hand in your program (if you currently never want to run it).
You won't be able to keep your links, though, without some gymnastics that don't really make sense. Using the conditional execution means not going down the rest of the branch.
I'd also suggest that if you have extra programs that you want to keep-but-not-run, you move them to another process flow, unless you have a very good reason for keeping them in that particular process flow. I usually have a few process flows:
In Development: where I keep programs that are in development and that I don't want to run along with the whole process flow (maybe I'm not sure where they go in the order yet, or whether I will include them at all)
Other Programs: where I put programs that I might run on an ad-hoc basis but not regularly.
Deprecated Programs: where I put programs that are "old" and not used anymore, but I want to keep around for reference or just to remember what I've done.
Finally, if you use version control, you can always get back to the program you had before; so if you use version control properly, you don't need to keep programs around just in case when you're fairly sure they're no longer needed.


Can argv be changed at runtime (not by the app itself)

I wonder whether the input parameters of main() can be changed at runtime. In other words, should we protect the app from a possible TOCTTOU attack when handling data in argv? Currently, I don't know of any way to change data that was passed in argv, but I'm not sure that such ways don't exist.
UPD: I forgot to point out that I'm curious about changing argv from outside the program, since argv is accepted from outside the program.
I'd say there are two main options based on your threat model here:
You do not trust the environment and assume that other privileged processes on your machine are able to alter the contents of your program's memory while it is running. If so, nothing is safe: the program could be altered to do literally anything. In that case, you can't even trust an integer comparison.
You trust the environment in which your program is running. In this case your program is the only owner of its data, and as long as you don't explicitly decide to alter argv or any other piece of data, you can rely on it.
In the first case, it doesn't matter if you guard against potential argv modifications, since you are not trusting the execution environment, so even those guards could be fooled. In the second case, you trust the execution environment, so you don't need to guard against the problem in the first place.
In both the above cases, the answer is: no, you shouldn't protect the app from a possible TOCTTOU attack when handling data in argv.
TOCTTOU-style problems usually arise from external untrusted data that can be modified by somebody else and should not be trusted by definition. A simple example is the existence of a file: you cannot rely on it, as other users or programs on the machine could delete or move it; the only way to make sure the file can be used is to try to open it. In the case of argv, the data is not external and is owned by the process itself, so the problem really does not apply.
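To make the file example concrete, here is a minimal C++ sketch (POSIX; the filename "lockfile" is made up for illustration): instead of checking whether a file exists and then creating it, which is racy, you create it atomically and let the failure itself tell you it already existed.

```cpp
// Minimal sketch: O_CREAT | O_EXCL fails atomically if the file already
// exists, so there is no window between "check" and "use" for another
// process to exploit.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("lockfile", O_CREAT | O_EXCL | O_WRONLY, 0600);
    if (fd == -1) {
        std::perror("open");  // e.g. EEXIST: someone else got there first
        return 1;
    }
    // ... use the file ...
    close(fd);
    return 0;
}
```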
In general, the strings that are passed to main() in the argv array are placed inside the program's user space, mostly at a fixed place at the top of the program stack.
The reason for such a fixed place is that some programs modify this area so that a privileged program (e.g. the ps command) can gather and show you different command arguments as the program evolves at runtime. This is used in programs like sendmail(8), or in a user program's threads, to show you which thread is doing what job in your program.
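As a hedged illustration of that trick, here is a minimal sketch (Linux-flavoured; the title string is made up) of a process overwriting its own argv[0] so that ps shows something different:

```cpp
#include <cstring>
#include <unistd.h>

int main(int argc, char* argv[]) {
    (void)argc;
    const char* title = "myprog: doing work";
    // Overwrite argv[0] in place, staying within its original length;
    // growing the string is exactly the OS-specific territory described
    // above. strncpy pads with '\0' if the new title is shorter.
    std::strncpy(argv[0], title, std::strlen(argv[0]));
    sleep(30);  // meanwhile, run "ps" in another terminal to observe
    return 0;
}
```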
This feature is not standard; it is used differently by different operating systems (I have described the BSD way). As far as I know, Linux and Solaris also exhibit this behaviour.
In general, this makes the arguments to main() something that, belonging to the user process space, has to be modified with care (using some operating-system-specific contract), as the area is normally subject to rigid conventions. The ps(1) command digs into the user space of the process it is going to display in order to produce the long listing showing the command parameters. The different operating systems document the exact format and how the stack is initialized by the exec(2) family of calls (you can probably also extract this from the standard linker script used on your system; the exec(2) manual page should be of help too).
I don't know exactly whether this is what you expected, or whether you just want to see if you can modify the arguments. As something belonging to the user space of the process, they are most probably modifiable, but I cannot think of any reason to do that apart from those described in this answer.
By the way, the FreeBSD manual page for the execlp(3) library function shows the following excerpt:
The type of the argv and envp parameters to execle(), exect(), execv(),
execvp(), and execvP() is a historical accident and no sane
implementation should modify the provided strings. The bogus parameter
types trigger false positives from const correctness analyzers. On
FreeBSD, the __DECONST() macro may be used to work around this
limitation.
This states clearly that you should not modify them (on FreeBSD at least). I assume the ps(1) command does the extra work of verifying those parameters properly, so as never to incur a security bug (well, this can be tested, but I leave that as an exercise for interested readers).
EDIT
If you look at /usr/include/sys/exec.h (line 43) in FreeBSD, you will find that there's a struct ps_strings located at the top of the user stack, which is used by the ps(1) command to locate the process environment and argv strings. While you can edit this to change the information a program gives to ps(1), there is also the setproctitle(3) library function (again, all of this is FreeBSD-ish; you'll have to dig to find out how Linux or other systems solve this problem).
I've tried editing it directly, but that doesn't work. Today there's a library function call (setproctitle(3), above) to achieve this, but the top of the stack is still filled with the data mentioned above (I assume for compatibility reasons).
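For completeness, a minimal FreeBSD-flavoured sketch of that library call (the title text is made up); setproctitle(3) takes a printf-style format and ps(1) shows the result:

```cpp
#include <sys/types.h>
#include <unistd.h>

int main() {
    // setproctitle(3) rewrites the title that ps(1) displays for this process.
    setproctitle("worker: processing job %d", 42);
    sleep(30);  // inspect with "ps" from another terminal
    return 0;
}
```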

cflock on application variables that rarely change

We currently have a series of variables that are loaded into the application scope that rarely change.
By rarely change, I mean that they are strings like phone numbers, or simple text values that appear on a website and may change once a week or once a month.
Since we are only ever reading these variables, and because they rarely change, is there any requirement to encapsulate them inside a cflock?
I think it would be a lot of coding overhead to wrap these variables inside a cflock, as the template may contain upwards of 20 instances of these static variables.
Any advice on this is greatly appreciated.
Personally I would say you do not need to. These variables are essentially constants.
However, you need to assess this yourself. You need to answer the question, 'what would be the ramifications of these variables being read with stale data?'
That is: if, as in your example, the wrong phone number is used on a request, is this a disaster? If that is a problem you can live with, then you need make no changes. If, however, there are variables that are used in calculations, or ones that will cause unacceptable problems if they are stale, then you will need to lock access to them. In this way you can focus your efforts where you need to and minimise the additional work.
As an aside if you do need to lock any variables then a good pattern to use is to store them inside a CFC instance that is stored in application scope. This way you can handle all the locking in the CFC and your calling code remains simple.
Depending on the version of ACF, Railo, etc. you are using, I would suggest that data like this might be better stored in the cache rather than in the application scope. The cache can also persist through restarts and could be a more efficient way to go.
Take a look at the cachePut, cacheGet, cacheDelete, etc. functions in the documentation. I believe this functionality was added in CF9 and Railo 3.2.
Taking it one step further you could simply cache the entire output that uses them for X time as well, so that each time that part is loaded it only has to load one thing from the cache instead of the twenty or so times you mention.
If you are going to store them in the application scope, then you only really need to put the cflock around the part of the code that updates them, locked at the application level. That way anything wanting to read them will have to wait for the update to finish before it can read them anyway, since the update thread will hold a lock on the application scope.

c++ program to find a file currently open in gvim?

I want to rename a folder, e.g. "mv -f old_proj_name new_proj_name".
But since a file inside it is open in the gvim editor, the rename operation on the folder is not allowed, and the folder is not moved to the new name.
I have manually used the Unlocker utility to check whether the file is locked by another process.
fopen() does not show the file as locked when the file is open in the gvim editor.
I tried the opendir() API as well, but it didn't help.
Now I want to implement the lock-checking functionality in my code, so that before doing the rename operation I know whether it will succeed.
Please guide me.
Regards,
Amol
before doing the rename operation I know whether it will succeed
This is a fallacy. You can only know whether you could perform the operation successfully at the time of the check. To know whether you can do it now, you need to check for it now. But when you actually get around to performing it, that "now" will turn to "back then". To have a reliable indication, you need to check again.
Don't you think it will get tiresome really fast?
So there are two ways of dealing with this.
First, you can hope (but never know) that nothing important happens between the check and the actual operation.
Second, you may skip the check altogether and just attempt the operation. If it fails, then you can't do it. There, you have killed two birds with one stone: you have checked whether an operation is possible, and performed it in the case it is indeed possible.
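Applied to the rename in the question, here is a minimal C++17 sketch of the second approach (paths taken from the question): just attempt the operation and let the error code tell you why it failed.

```cpp
// Minimal sketch: no racy pre-check; the rename is simply attempted and
// the error code reports why it failed (e.g. the directory is busy or
// access is denied).
#include <filesystem>
#include <iostream>

int main() {
    std::error_code ec;
    std::filesystem::rename("old_proj_name", "new_proj_name", ec);
    if (ec) {
        std::cerr << "rename failed: " << ec.message() << '\n';
        return 1;  // could not do it; report instead of pre-checking
    }
    std::cout << "renamed successfully\n";
    return 0;
}
```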
Update
If your data is organised in such a way that you have to perform several operations that may fail, and data consistency depends on all these operations succeeding or failing at once, then there's an inherent problem. You can check for some known failure conditions, but (a) you can never check for all possible failure conditions, and (b) any check is valid just for the moment it's performed. So any such check will not be fully reliable. You may be able to prevent some failures but not others. An adequate solution to this would be data storage with proper rollback facility built in, i.e. a database.
Hope it helps.

C++ Benchmark tool

I have an application which makes database requests. I guess it doesn't actually matter what kind of database I am using, but let's say it's a simple SQLite-driven database.
Now, this application runs as a service and does some amount of requests per minute (this number might actually be huge).
I want to benchmark the queries to retrieve their number and their maximal/minimal/average running times over some period, and I wish to design my own tool for this (obviously, such tools exist, but I need my own for reasons of my own :).
So, could you advise an approach for this task?
I guess there are several possible cases:
1) I have access to the application source code. Here, obviously, I want to make some sort of cross-application integration, probably using pipes. Could you advise something about how this should be done and (if there is one) any other possible solution?
2) I don't have the sources. So, is it even possible to perform some neat injection from my application to benchmark the other one? I hope there is a way, maybe a hacky one, whatever.
Thanks a lot.
See C++ Code Profiler for a range of profilers.
Or C++ Logging and performance tuning library for rolling your own simple version
My answer is valid just for case 1).
In my experience, profiling is a fun but difficult task. Using professional tools can be effective, but it can take a lot of time to find the right one and learn how to use it properly. I usually start in a very simple way. I have prepared two very simple classes. The first one, ProfileHelper, records the start time in its constructor and the end time in its destructor. The second class, ProfileHelperStatistic, is a container with extra statistical capability (a std::multimap plus a few methods to return the average, standard deviation and other funny stuff).
ProfileHelper holds a reference to the container, and before exiting, its destructor pushes the data into the container. You can declare the ProfileHelperStatistic in main(), and if you create a ProfileHelper on the stack at the beginning of a specific function, the job is done. The constructor of the ProfileHelper will store the starting time, and the destructor will push the result onto the ProfileHelperStatistic.
It is fairly easy to implement, and with minor modification it can be made cross-platform. The times taken to create and destroy the objects are not recorded, so you will not pollute the results. Calculating the final statistics can be expensive, so I suggest you run that once at the end.
You can also customize the information that you are going to store in ProfileHelperStatistic adding extra information (like timestamp or memory usage for example).
The implementation is fairly easy: two classes that are no bigger than 50 lines each (a rough sketch follows after the hints below). Just two hints:
1) catch everything in the destructor!
2) consider using a collection with constant-time insertion if you are going to store a lot of data.
This is a simple tool and it can help you profile your application in a very effective way. My suggestion is to start with a few macro-level functions (5-7 logical blocks) and then increase the granularity. Remember the 80-20 rule: 20% of the source code uses 80% of the time.
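Here is a rough, hedged sketch of the two classes as I understand them from the description above (C++11; the names follow the answer, and the exact statistics, tags and reporting are up to you):

```cpp
#include <chrono>
#include <iostream>
#include <map>
#include <string>

class ProfileHelperStatistic {
public:
    void add(const std::string& tag, double ms) { samples_.emplace(tag, ms); }
    void report() const {  // run once at the end: aggregating can be expensive
        for (const auto& s : samples_)
            std::cout << s.first << ": " << s.second << " ms\n";
    }
private:
    std::multimap<std::string, double> samples_;  // tag -> elapsed time
};

class ProfileHelper {
public:
    ProfileHelper(ProfileHelperStatistic& stats, std::string tag)
        : stats_(stats), tag_(std::move(tag)),
          start_(std::chrono::steady_clock::now()) {}
    ~ProfileHelper() {
        try {  // hint 1: catch everything in the destructor
            auto end = std::chrono::steady_clock::now();
            std::chrono::duration<double, std::milli> elapsed = end - start_;
            stats_.add(tag_, elapsed.count());
        } catch (...) {}
    }
private:
    ProfileHelperStatistic& stats_;
    std::string tag_;
    std::chrono::steady_clock::time_point start_;
};

// Usage: declare the statistics container in main() (a global here for
// brevity), then create a ProfileHelper on the stack at the start of each
// function you want to profile.
ProfileHelperStatistic g_stats;

void run_query() {
    ProfileHelper ph(g_stats, "run_query");
    // ... the work being timed ...
}

int main() {
    for (int i = 0; i < 3; ++i) run_query();
    g_stats.report();
}
```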
One last note about databases: a database tunes its performance dynamically; if you run a query several times, by the end it will be quicker than at the beginning (Oracle does this, and I guess other databases do as well). In other words, if you test the application heavily and artificially, focusing on just a few specific queries, you can get overly optimistic results.
I guess it doesn't actually matter what kind of database I am using, but let's say it's a simple SQLite-driven database.
It's very important what kind of database you use, because the database-manager might have integrated monitoring.
I can speak only about IBM DB2, but I believe that DB2 is not the only DBMS with integrated monitoring tools.
Here, for example, is a short overview of what you can monitor in IBM DB2:
statements (all executed statements, execution count, prepare-time, cpu-time, count of reads/writes: tablerows, bufferpool, logical, physical)
tables (count of reads / writes)
bufferpools (logical and physical reads/writes for data and index, read/write times)
active connections (running statements, count of reads/writes, times)
locks (all locks and type)
and many more
The monitoring data can be accessed via SQL or an API from your own software, as, for example, DB2 Monitor does.
Under Unix, you might want to use gprof and its graphical front-end, kprof. Compile your app with the -pg flag (I assume you're using g++) and run it through gprof and observe the results.
Note, however, that this type of profiling will measure the overall performance of an application, not just the SQL queries. If it's the performance of queries you want to measure, you should use special tools designed for your DBMS; for example, MySQL has a built-in query profiler (for SQLite, see this question: Is there a tool to profile sqlite queries?)
There is a (linux) solution you might find interesting since it could be used on both cases.
It's the LD_PRELOAD trick. LD_PRELOAD is an environment variable that lets you specify a shared library to be loaded right before your program is executed. The symbols loaded from this library will override any others available on the system.
The basic idea is to use this custom library as a wrapper around the original functions.
There are a bunch of resources available that explain how to use this trick: 1, 2, 3
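As a hedged sketch of case 2), assuming Linux/glibc and that the target program calls sqlite3_exec() from a shared libsqlite3 (both assumptions for illustration), a preloaded wrapper could time each query:

```cpp
// Build:  g++ -shared -fPIC -o libwrap.so wrap.cpp -ldl
// Run:    LD_PRELOAD=./libwrap.so ./the_service
#include <dlfcn.h>   // RTLD_NEXT (g++ defines _GNU_SOURCE, which it needs)
#include <time.h>
#include <cstdio>

// Loose signature with void* so this sketch does not need the sqlite3 headers.
using sqlite3_exec_fn = int (*)(void*, const char*, void*, void*, char**);

extern "C" int sqlite3_exec(void* db, const char* sql, void* cb,
                            void* arg, char** errmsg) {
    // Look up the original symbol the first time through.
    static sqlite3_exec_fn real =
        reinterpret_cast<sqlite3_exec_fn>(dlsym(RTLD_NEXT, "sqlite3_exec"));

    timespec t0{}, t1{};
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int rc = real(db, sql, cb, arg, errmsg);  // run the real query
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    std::fprintf(stderr, "[bench] %.3f ms: %s\n", ms, sql);
    return rc;
}
```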
Here, obviously, I want to make some sort of cross-application integration, probably using pipes.
I don't think that's obvious at all.
If you have access to the application, I'd suggest dumping all the necessary information to a log file and process that log file later on.
If you want to be able to activate and deactivate this behavior on-the-fly, without re-starting the service, you could use a logging library that supports enabling/disabling log channels on-the-fly.
Then you'd only need to send a message to the service by whatever means (socket connection, ...) to enable/disable logging.
If you don't have access to the application, then I think the best way would be what MacGucky suggested: let the profiling/monitoring tools of the DBMS do it. E.g. MS-SQL has a nice profiler that can capture requests to the server, including all kinds of useful data (CPU time for each request, IO time, wait time etc.).
And if it's really SQLite (plus you don't have access to the source) then your chances are rather low. If the program in question uses SQLite as a DLL, then you could substitute your own version of SQLite, modified to write the necessary log files.
Use Apache JMeter to test the performance of your SQL queries under high load.

How can I log which thread called which function from which class and at what time throughout my whole project?

I am working on a fairly large project that runs on embedded systems. I would like to add the capability of logging which thread called which function from which class and at what time. E.g., here's what a typical line of the log file would look like:
Time - Thread Name - Function Name - Class Name
I know that I can do this by using the _penter hook function, which would execute at the beginning of every function called within my project (Source: http://msdn.microsoft.com/en-us/library/c63a9b7h%28VS.80%29.aspx). I could then find a library that would help me find the function, class, and thread from which _penter was called. However, I cannot use this solution since it is VC++ specific.
Is there a different way of doing this that would be supported by non-VC++ implementations? I am using the ARM/Thumb C/C++ Compiler, RVCT3.1. Additionally, is there a better way of tracking problems that may arise from multithreading?
Thank you,
Borys
I've worked with a system that had similar requirements (ARM embedded device). We had to build much of it from scratch, but we used some CodeWarrior stuff to do it, and then the map file for the function name lookup.
With CodeWarrior, you can get some code inserted into the start and end of each function, and using that, you can track when you enter each function and when you switch threads. We used assembly, and you might have to as well, but it's easier than you think. One of your registers will hold your return address, which is a hex value. If you compile with a map file, you can then use that hex value to look up the (mangled) name of the function. You can find the class name in the function name.
But, basically, get yourself a stream to somewhere (ideally to a desktop), and yell to the stream:
Entered Function #####
Left Function #####
Switched to Thread #
(PS - Actual encoding should be more like 1 21361987236, 2 1238721312, since you don't actually want to send characters)
If you're only ever processing one thread at a time, this should give you an accurate record of where you went, in the order you went there. Attach clock tick info for function profiling, add a message for allocations (and deallocations) and you get memory tracking.
If you're actually running multiple threads, it could get substantially harder, or it could be more of the same; I don't know. I'd put timing information on everything, and then have a separate stream for each thread. Although you might just be able to detect which processor you're running on and report that for each thread... I don't, however, know whether any of that will work.
Still, the basic idea was: Report on each step (function entry/exit, thread switching, and allocation), and then re-assemble the information you care about on the desktop side, where you have processing to spare.
gcc has the __PRETTY_FUNCTION__ define. With regard to the thread, you can always call gettid() or similar.
I've written a few log systems that just increment a thread number and store it in thread-local data. That helps with attributing log statements to threads. (Time is easy to print out.)
For tracing all function calls automatically, I'm not so sure. If it's just a few, you can easily write an object & macro that logs entry/exit using the __func__ identifier (or something similar for your compiler).
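A minimal sketch of that object-and-macro idea (C++11; __PRETTY_FUNCTION__ is a GCC-ism, so substitute __func__ or your compiler's equivalent, and swap the printf for whatever logging channel your embedded target actually has):

```cpp
#include <chrono>
#include <cstdio>
#include <sstream>
#include <thread>

class ScopeLogger {
public:
    explicit ScopeLogger(const char* fn) : fn_(fn) { log("enter"); }
    ~ScopeLogger() { log("exit"); }
private:
    void log(const char* what) {
        // Time - Thread - enter/exit - Function, matching the log format
        // described in the question.
        auto now = std::chrono::steady_clock::now().time_since_epoch();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(now);
        std::ostringstream tid;
        tid << std::this_thread::get_id();
        std::printf("%lld - thread %s - %s - %s\n",
                    static_cast<long long>(us.count()),
                    tid.str().c_str(), what, fn_);
    }
    const char* fn_;
};

// Drop this macro at the top of each function you want traced.
#define LOG_SCOPE() ScopeLogger _scope_logger_(__PRETTY_FUNCTION__)

void worker() {
    LOG_SCOPE();
    // ... real work ...
}

int main() {
    LOG_SCOPE();
    std::thread t(worker);
    t.join();
}
```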