I am trying to run the services from the GATE web service in NeOn 2.3, and none of them works.
Even ANNIE, which runs so well in GATE, doesn't run; or rather, it stays processing for an indefinite time, for something that should take no more than a couple of seconds. I run the wizard, set the input directory, leave the file pattern at its default, and set a folder and name for the output ontology. Shouldn't that be enough? Shouldn't I get something, even an error?
I think it's the service location that's giving me problems:
http://safekeeper1.dcs.shef.ac.uk/neon/services/sardine
http://safekeeper1.dcs.shef.ac.uk/neon/services/sprat
http://safekeeper1.dcs.shef.ac.uk/neon/services/annie
http://safekeeper1.dcs.shef.ac.uk/neon/services/termraider
How can I confirm that? Can I run the services offline?
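A quick way to check whether the endpoints answer HTTP at all is a short probe; for instance, a minimal Python sketch over the URLs listed above:

# Probe each NeOn service endpoint to see whether it responds at all.
import urllib.request

for svc in ("sardine", "sprat", "annie", "termraider"):
    url = f"http://safekeeper1.dcs.shef.ac.uk/neon/services/{svc}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(svc, "->", resp.status)      # any HTTP answer means it is reachable
    except Exception as exc:
        print(svc, "-> unreachable:", exc)     # timeout or connection failure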
Can anyone give me a hand?
Also, I've seen SPRAT running on GATE, in "SPRAT: a tool for automatic semantic pattern-based ontology population".
Can anyone tell me how, and with what versions?
Thanks,
Celso Costa
We just got a new server at work that will be used only for running SAS code, and I've been asked to run some tests to make sure it's performing better than our other servers. I'm not an expert at this, so I want to avoid naive mistakes that don't properly measure the performance of the server. My header looks like this:
options fullstimer; /* write full CPU, memory and I/O statistics to the log */
%LET BenchStartTime = %sysfunc(datetime(),22.); /* wall-clock start, checked against the log's real time */
Which I use as a check for the "real time" report in the log. I have a vague understanding of the difference between "user cpu time" and "system cpu time", but if anyone wants to offer up additional information on that, that would be helpful.
Anyway, the main point of this post is that I want to know whether there are any standard benchmark tests I should be using to see if this new server is better than the old ones. Currently I'm using something I found online which just appends a bunch of copies of sashelp.class (but I think this might be a bad idea, because pulling from the C: drive and loading onto a different drive might be the same across all servers, right? If the C: drive is the slowest, doesn't it become the bottleneck?), and I'm also using code that generates a bunch of random data of a fixed size and compares runtimes. Is this the correct approach? Is there something else I should be doing? How many times should I run these benchmark tests to make sure a result isn't a fluke?
Thanks for your help!
I would test by doing the things that you normally do. If you run large merges, then you're basically talking I/O; so just make a very large dataset, write it out, read it in, etc., and perform the same test on the other machine. Run each test a few times, each time in a fresh SAS session.
Further, it sounds like you need to make sure the new server can handle multiple concurrent sessions. You can simulate this in part by submitting many connections from one computer using SAS/CONNECT, which allows you to start multiple concurrent sessions. For example, create a script that starts a local SAS session, signs on to the server, and rsubmits a job of normal difficulty that takes maybe 5 to 10 minutes; then run that script 20 times concurrently (you can script this, as sketched below, or just double-click a .bat file 20 times). See how the new server handles it compared to the old one.
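To give one concrete way to script that: a minimal Python sketch that fires off N concurrent SAS batch sessions and times them. The sas.exe path and the job file are assumptions to replace with your own; -sysin and -log are standard SAS batch invocation options.

# launch_bench.py - fire off N concurrent SAS batch sessions and time them.
# The SAS install path and the benchmark job are placeholders for your setup.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

SAS_EXE = r"C:\Program Files\SASHome\SASFoundation\9.4\sas.exe"  # assumed install path
JOB = r"C:\bench\bench_job.sas"   # the benchmark program each session runs
SESSIONS = 20                     # number of concurrent sessions to simulate

def run_session(i):
    start = time.time()
    # -sysin runs SAS in batch; give each session its own log file
    subprocess.run([SAS_EXE, "-sysin", JOB, "-log", rf"C:\bench\job{i}.log"],
                   check=True)
    return time.time() - start

with ThreadPoolExecutor(max_workers=SESSIONS) as pool:
    times = list(pool.map(run_session, range(SESSIONS)))

print(f"min {min(times):.1f}s  max {max(times):.1f}s  avg {sum(times)/len(times):.1f}s")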
On SAS 9.2 and later you can use the %SASHOME%/SASFoundation/9.x/sasiotest.exe utility. You will find further guidance in SAS Support note 51659.
Edit:
A similar utility can be downloaded for Unix platforms; details are in Support note 51660.
I hope you are doing well, and I really appreciate your help with my query.
We have our T3000 system written in C++ (software at http://www.temcocontrols.com/ftp/software/9TstatSoftware.zip, and the code is available at https://github.com/temcocontrols/T3000_Building_Automation_System).
I am trying to integrate the BIRT reporting tool into my C++ application. I want to create reports based on the data available in our T3000 system. I think BIRT is embeddable (is it?). We don't need to compile or change the BIRT project itself; we mainly need to be able to call it from T3000.exe.
My thinking is that we could add a menu item to the existing T3000 and display a report on a single user click.
Can you please help me solve my issue with BIRT? I really appreciate your answers.
Regards
Raju
Well, the answer depends on what your definition of "embeddable" is.
BIRT is written in pure Java.
I can think of four different ways:
1. Integrate Java code into the existing C/C++ program (see Embed Java into a C++ application?).
2. Use the BIRT runtime engine and generate the report as PDF or HTML from the command line; basically, you call the java executable from your program with several arguments (see the sketch after this list). See Birt - How to run report engine on the console? and http://eclipser-blog.blogspot.de/2008/02/automatic-generation-of-birt-reports.html for more information.
3. Run a Java web server like Tomcat in a second process, then start your report by calling an HTTP URL (e.g. you could use the included Servlet example). See http://www.eclipse.org/birt/documentation/integrating/viewer-usage.php
4. Similar to 3 (see the fourth way described further below).
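To make option 2 concrete: the BIRT runtime download ships with a genReport script (genReport.bat / genReport.sh under the ReportEngine directory) that wraps the java invocation. Here is a rough sketch of driving it from another process, written in Python for brevity (from C++ you would assemble the same command line for CreateProcess or system()); the install path and flag spellings are assumptions to verify against your BIRT version:

# Generate a BIRT report as PDF via the runtime's genReport wrapper script.
# Path and flag names are assumptions -- check genReport --help for your version.
import subprocess

cmd = [
    r"C:\birt-runtime\ReportEngine\genReport.bat",  # assumed runtime location
    "--format", "PDF",                              # output format: PDF or HTML
    "--output", r"C:\reports\out.pdf",              # where to write the report
    r"C:\reports\design\sales.rptdesign",           # hypothetical report design
]
subprocess.run(cmd, check=True)  # raises CalledProcessError if the engine fails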
Some notes:
The second option is slow, due to the Java and BIRT engine startup overhead (this may take several seconds). With the first and third options, the startup overhead is, or can be, incurred only once instead of once per report.
For the second and third options it may be necessary to modify the existing code of the example programs to suit your needs.
The first option is probably the best for an industry-quality solution, but it is also the most difficult to develop.
Anyway, Java skills are necessary IMHO.
If you plan to run this on a SoC instead of a PC, take performance into account. Is a Java-based solution well suited for this kind of hardware? BIRT needs quite a lot of RAM and CPU for a SoC, though hardware like the Raspberry Pi 3 should handle it quite easily, I reckon.
I integrated the BIRT runtime into an existing Python application (all of this running on an application server) in a fourth way: I wrote a listener program that listens on a TCP socket for BIRT tasks. It uses a pool of worker processes (written in Java) which in turn use the BIRT report engine to generate the output. The client program (here: written in Python) opens a TCP connection to the listener and uses this socket to tell it which report to generate (including report parameters and destination file name). The listener then chooses a worker process and hands the task to it. (A stripped-down client sketch follows below.)
So, basically, this fourth option is similar to the third one, with two differences:
The communication is socket-based (instead of HTTP), allowing bidirectional communication.
The architecture is multi-process instead of multi-threaded. We chose this because very large reports could cause out-of-memory errors for otherwise unrelated reports that just happen to run at the same time. It's the same basic architecture Oracle chose for their reports server.
However, developing the programs took months.
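For flavor, here is what the client side of such a setup might look like, reduced to a minimal Python sketch; the host, port, and line-based wire format are hypothetical stand-ins, not the actual protocol described above:

# Minimal client sketch for a socket-based BIRT listener.
# Host, port, and the one-line wire format are hypothetical placeholders.
import socket

def request_report(design, output, params):
    with socket.create_connection(("report-host", 9099), timeout=300) as sock:
        # hypothetical request format: design|output|key=value;key=value
        fields = ";".join(f"{k}={v}" for k, v in params.items())
        sock.sendall(f"{design}|{output}|{fields}\n".encode("utf-8"))
        reply = sock.makefile().readline().strip()  # e.g. "OK" or "ERROR: ..."
        if not reply.startswith("OK"):
            raise RuntimeError(f"report generation failed: {reply}")

request_report("sales.rptdesign", "/tmp/sales.pdf", {"region": "EMEA"})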
HVB: I have to give you more than a simple thanks for the explanation above; this info will save us time, I am sure. Raju will be sharing our experience after we get a little deeper into the project, so others can benefit.
I would love to hear ideas on how best to move code from a development server to a production server.
A list of gotchas, a "don't do this" list, would be helpful.
So would any tools that help automate the steps of:
making backups of the existing code, given a list of files
recording the deployment of those files from dev to production
allowing easier rollback if the deployment or the app fails in any way...
I have never worked at a company that had a deployment process, other than a very manual one: FTP the files from dev to production.
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem: the focus should be on ensuring a meaningful software development life cycle (SDLC) - planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
Articulate and record what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on and that's translated into various artifacts such as written use cases. Ultimately, what you want is something recorded in an electronic system where you can associate code changes with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in this electronic system - typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build - all the bits needed for whatever's changed - and tag that build in source control: that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own. The build is then delivered for...
Validation
Perhaps the most important step, and one that many shops ignore - at their own peril. Defects found in production are exponentially more expensive to fix than those discovered earlier in the process, and validation is often the only step where they are caught beforehand - so make sure your shop does it.
This should not be done by the programmer! That's like the fox guarding the hen house. And whoever does it should be following some sort of test plan. We use TestLink. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed the same way you would deploy into production.
If all goes well (we usually need a minimum of three builds), the build is validated. And then it goes to...
Deployment
This should be a non-event, because you're taking a validated build and following the same steps you used in testing. Maybe it first hits a staging server with an automated copying process, but the point is that it shouldn't be an issue at this stage, because you validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven - build and deployment tools used in the Java world that can help you copy/FTP files, make backups, and even check out code from SVN.
You can automate your deployment steps using these tools; for example, Ant will allow you to declare a set of tasks as part of your deployment. So you could, for example (see the sketch after this list):
Check out a revision to a directory, using SVNAnt or similar
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
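If it helps to see those four steps end to end before writing the Ant targets, here is a rough Python sketch of the same sequence; every host, path, and credential below is a placeholder, and the FTP step uploads just the archive for brevity:

# The four deployment steps above as a plain script (placeholders throughout).
import shutil
import smtplib
import subprocess
from email.message import EmailMessage
from ftplib import FTP

VERSION = "1.2.3"

# 1. check out the tagged revision into a working directory
subprocess.run(["svn", "checkout",
                f"svn://path/to/your/project/tags/{VERSION}", "build"], check=True)

# 2. zip the checkout into a backup directory (backups/ must already exist)
shutil.make_archive(f"backups/site-{VERSION}", "zip", "build")

# 3. FTP to the web server (a per-file upload loop is elided for brevity)
ftp = FTP("www.example.com", "deployuser", "secret")
with open(f"backups/site-{VERSION}.zip", "rb") as f:
    ftp.storbinary(f"STOR site-{VERSION}.zip", f)
ftp.quit()

# 4. email the team a short deployment report
msg = EmailMessage()
msg["From"], msg["To"] = "build@example.com", "team@example.com"
msg["Subject"] = f"Deployed version {VERSION}"
msg.set_content(f"Version {VERSION} was deployed to www.example.com.")
with smtplib.SMTP("localhost") as s:
    s.send_message(msg)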
Really, you can do almost anything you're willing to put time into using Ant. Maven is a little more structured (and newer), and you can see a discussion of the differences here.
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version% -m "Tagging release %version%"
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
<!-- mirror the build directory to the target, skipping SVN metadata and test code -->
<sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
  <fileset dir="${basedir}">
    <exclude name="**/.svn/**"/> <!-- SVN metadata directories -->
    <exclude name="**/test/"/>   <!-- test sources stay out of production -->
  </fileset>
</sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems (we currently use it for both ColdFusion and Java, but it can be used for most languages). It has a ton of third-party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html
We are using Excel to convert SpreadsheetML to XLS in an ASP.NET web service. Moreover, if the user checks the right checkboxes, we spawn a thread that uses Excel to print the spreadsheet.
Recently we deployed the app in a new environment, and we started having problems: the first time someone tries to print, Excel seems to hang on the server, i.e. the call to the PrintOut method on the workbook never returns.
But if we log in to the server as the application pool identity, open Excel, send something to the printer, and close it again, printing works from then on!
I suspect that Excel is showing an invisible dialog - the symptoms are the same as we saw earlier, when Excel stalled on a "cannot use object linking and embedding" dialog that appeared when it opened.
I know that using server-side Office automation is bad, but this is a legacy app that is very hard to change, so please don't just advise me to re-design our solution.
Has anyone had any experiences with this kind of behavior?
Well, no one seems to have had this problem.
The really weird thing is that my night jobs (ordinary .NET .exes) are perfectly capable of printing - it's only my web services that have this problem.
So I solved the problem by doing what I should have done long ago: I made a simple Windows service with Topshelf, that responds to some MSMQ messages and does the printing, and then my web services can order print-outs via a message queue.
Much nicer in every way!
I've had no end of problems (poor performance, hanging processes, crashing processes etc) using Microsoft Excel, Word and PowerPoint through interop in a web service to print Office documents to PDF format. I too have faced problems that I suspect are because of invisible dialog boxes (maybe a file is corrupt, read-only recommended has been set, file is password protected, or whatever).
I know there are tools available that don't use Office, but they are very expensive. My solution was to switch to automating OpenOffice. OpenOffice seems to be much more stable, and I've left hanging processes and the like behind.
So, while I suppose I am saying "don't automate Microsoft Office", I'm not suggesting that you abandon automation altogether; just that I've had much more success automating OpenOffice than Microsoft Office.
SpreadsheetGear for .NET can read xls or xlsx workbooks and can print to the default printer without displaying any dialog boxes (see the WorkbookView.Print() method).
You can download an evaluation here.
Disclaimer: I own SpreadsheetGear LLC
Like many people, I have seen this sort of behavior. It is caused by using the Office APIs in a server, especially a multithreaded ASP.NET application.
However, you've said you don't want to know about not shooting yourself in the foot, so there's little more to say. You seem to be trapped by the consequences of earlier foolishness.
OK, stop me if you've heard this one:
A man asks a question on StackOverflow. He says, "SO, bad stuff happens when I automate an Office application from inside a service". So, John Saunders says, "So, don't automate the Office application from inside a service. Automate it from inside a desktop application, as Microsoft intended to be done."
When a request comes in for something that requires Excel, you should create a process running a Windows Forms application. The application may have to start with no window, or you may need to start it in the context of a Remote Desktop connection. In any case, the task to be performed may be passed as a command line parameter, or the program can host a WCF service to have commands sent to it.
This program can call Excel just like Excel expects to be called. It can probably even handle more than one command to Excel (one at a time). However, if it hangs, the process can be killed and another one started.
I've never tried this, but it sounds like it would work better than trying to get Office Automation to do something it was not designed to do.
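The "kill it and start another" part is the main win of this design. As a language-neutral illustration of that supervision loop (sketched in Python; the worker executable name is hypothetical):

# Supervise a per-request worker process; kill it and retry if it hangs.
# excel_worker.exe is a hypothetical desktop app that performs one Excel task.
import subprocess

def run_with_retry(task_args, timeout_s=60, retries=2):
    for attempt in range(retries + 1):
        try:
            # each request gets a fresh process, as the answer suggests
            subprocess.run(["excel_worker.exe", *task_args],
                           timeout=timeout_s, check=True)
            return
        except subprocess.TimeoutExpired:
            # subprocess.run kills the hung child on timeout; just try again
            print(f"worker hung (attempt {attempt + 1}); retrying...")
    raise RuntimeError("worker kept hanging; giving up")

run_with_retry(["print", r"C:\docs\report.xlsx"])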
We develop custom survey web sites, and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches that are triggered by how items are responded to. All surveys are rigorously tested before being released to clients, and this testing results in a lot of manual work. I would like to learn of some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine that creates and writes ASP pages and processes the responses into a database, so the only way I can see to test a site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.
Could someone please provide some suggestions on how to achieve this? Thank you for your time.
Brett
Check out Selenium: http://selenium.openqa.org/
Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss
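To give a flavor of what a Selenium-driven survey test can look like, here is a minimal sketch using the current Python bindings; the URL, element names, and the expected branch text are made-up placeholders for your survey engine:

# Answer one survey question and assert that the right branch was taken.
# URL, element names, and expected text are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost/survey/start.asp")
    # choose the answer that should trigger the branch under test
    driver.find_element(By.CSS_SELECTOR, "input[name='q1'][value='yes']").click()
    driver.find_element(By.NAME, "next").click()
    # the 'yes' branch should lead to question 2b, not 2a
    assert "Question 2b" in driver.page_source
finally:
    driver.quit()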
You could also check out WatiN.
Sounds like your engine could generate a test script using something like Test::WWW::Mechanize (a similar idea is sketched below in Python).
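Since the question also asks about verifying the results in the database, here is the same idea sketched with Python's requests library standing in for Test::WWW::Mechanize; the URL, form fields, and ODBC DSN are placeholders:

# Post a scripted survey response, then verify the stored row.
# URL, form fields, and the ODBC DSN are placeholders.
import pyodbc
import requests

session = requests.Session()
session.post("http://localhost/survey/q1.asp",
             data={"q1": "yes", "submit": "Next"})

# check that the engine wrote the expected answer to the database
conn = pyodbc.connect("DSN=surveydb")
row = conn.cursor().execute(
    "SELECT answer FROM responses WHERE question = ?", "q1").fetchone()
assert row and row[0] == "yes", f"unexpected stored answer: {row}"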
The usual test methodologies apply: white box and black box.
White-box testing, for you, may mean instrumenting your application so you can drive it into a particular state, and then predicting the result you expect.
Black-box testing may mean that you hit a page and then check which of the possible outcomes are valid. Rinse and repeat until you get sufficient coverage.
Another thing we use is monitoring statistics for our service: did we get the expected number of hits on this page? We routinely run A/B tests, and I have run A/B tests against refactored code to verify that nothing changed before rolling things out.
/Allan
I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:
Selenium (open source/cross platform)
TestComplete (commercial/Windows-based)
Both will let you create test suites by verifying database records based on interactions with the web app.
The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.
Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.
I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.
I use QEngine (commercial) for the same purpose. I need to add data and check for it in the UI; I write one script which does this and call it in a loop. The data can be passed via either CSV or Excel.
Check out www.qengine.com; you can also try Watir.
My proposal is QA Agent (http://qaagent.com). It seems to be a new approach, because you do not need to install anything; you just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!