Filename-parsing logic in PySCIPOpt - linear-programming

What are the allowed extensions for the parameter filename in pyscipopt.Model.writeProblem(filename)?
It works if filename='a' or filename='a.mps',
but if e.g. filename='a.problem', then it fails with the error:
Exception: SCIP: a required plugin was not found !

The file extension is used to indicate the file format to write out the problem. The standard formats are CIP, MPS, and LP. The full list can be found here and more formats can be supported by adding custom file readers.
There is no PROBLEM file format, which is why you get this error. You could add a custom file reader to support PROBLEM files, which is why SCIP's error message is a bit cryptic here: it reports a missing plugin rather than an unknown format.
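For illustration, a minimal sketch (assuming PySCIPOpt is installed; the model itself is arbitrary) of how the extension selects the writer:
# the file extension passed to writeProblem() selects the output format
from pyscipopt import Model
model = Model("example")
x = model.addVar("x", vtype="I")  # an integer variable
model.addCons(x <= 10)
model.setObjective(x, "maximize")
model.writeProblem("a.lp")   # LP format
model.writeProblem("a.mps")  # MPS format
model.writeProblem("a.cip")  # CIP format
# model.writeProblem("a.problem")  # fails: no reader/writer plugin registered for ".problem"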

Related

TypeError: Unable to modify PDF file, make sure that output file target is available and that it is not protected

I'm trying to fill a PDF form using HummusJS, but it throws the error TypeError: Unable to modify PDF file, make sure that output file target is available and that it is not protected when running the PDF generation in an AWS Lambda function. It works fine on my local machine, and no logs are generated. Is there any debugging option available? I have wasted three days trying to solve this issue.
Any help would be highly appreciated.
Thanks
I experienced this issue when the PDF file I was reading had a size of 0. At some point during my tests, I suspect, the PDF I was working with got overwritten with empty data, and it turns out that this was what was causing the permission error.
I was working from this sample.

How to read Bazel's binary build event protocol file?

I want to implement fetching of compiler warnings with Bazel (Bazel based build). I know that there are files which can already be used for this. These files are located at:
$PROJECT_ROOT/bazel-out/_tmp/action_outs/
and are named stderr-XY.
Bazel has the ability to save all build events to a designated file. Note that currently (Bazel 0.14) there are three supported formats for that file: text, JSON, and binary. This question is about the binary file only.
If I have understood Google's protocol buffers correctly, the workflow for using them is:
You specify how you want the information you're serializing to be structured by defining protocol buffer message types in .proto files.
Once you've defined your messages, you run the protocol buffer compiler (protoc) for your application's language on your .proto file to generate data access classes.
Include the generated files in your project and use the generated classes in your code. By "use" I mean populating, serializing and retrieving protocol buffer messages (e.g. for C++, which is the programming language that I use, the SerializeToOstream and ParseFromIstream methods can be used for such tasks).
To conclude this question:
As it is stated here:
"Have Bazel serialize the protocol buffer messages to a file by specifying the option --build_event_binary_file=/path/to/file. The file will contain serialized protocol buffer messages with each message being length delimited."
I do not see a way around the fact that a developer who wants to use Bazel's functionality to write build events to a binary file needs to know the "format", or more precisely the class structure, in order to read that binary file. Am I missing something here? Can all of this be done, and how?
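For illustration, here is a rough sketch (in Python, just to show the idea; it assumes a build_event_stream_pb2 module generated with protoc from Bazel's build_event_stream.proto, and it uses _DecodeVarint32, an internal helper of the protobuf package) of what reading such a length-delimited stream could look like:
import build_event_stream_pb2  # assumed to be generated from build_event_stream.proto
from google.protobuf.internal.decoder import _DecodeVarint32  # internal protobuf helper
def read_build_events(path):
    # each serialized message is prefixed with its length encoded as a varint
    with open(path, 'rb') as f:
        buf = f.read()
    pos = 0
    while pos < len(buf):
        size, pos = _DecodeVarint32(buf, pos)
        event = build_event_stream_pb2.BuildEvent()
        event.ParseFromString(buf[pos:pos + size])
        pos += size
        yield event
for event in read_build_events('bazelbepbinary.bin'):
    print(event.id)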
Also, I have tried to use protoc --decode_raw < bazelbepbinary.bin and it says:
Failed to parse input.
All of this was done on Ubuntu 16.04; at the moment I'm not sure which GCC version I'm using, but I will add it to the question once I have access to that information.
My side question is: is it possible to capture only those build events which reflect build warnings (without using some kind of filter, e.g. grep, on the generated file)? I have read the documentation and used:
bazel help build --long | grep "relevant_build_event_protocol_keywords"
and was unable to find anything like that in the API.

Cannot export Pandas dataframe to specified file path in Python for both csv and excel

I have written a program which exports my Pandas DataFrame to a csv as well as an excel file. However, the problem I am facing is that, seemingly at random, the export to both file formats fails, and I see an error stating "No such File Path or Directory".
My code is as follows:
frame3.to_csv('C:/Users/Downloads/ABC_test.csv',index=False)
writer = pd.ExcelWriter('C:/Users/Downloads/ABCD.xlsx', engine='openpyxl')
frame3.to_excel(writer, sheet_name='Sheet1')
writer.save()
The major issue is that this code works sometimes and sometimes it does not! Going by what others have posted here, I tried to build the output path using
pth1 = os.path.join(r'C:/Users/Downloads/FinalProgram/', output_filename)
frame3.to_csv(pth1)
Sadly, this has no effect on this stubborn error. I would appreciate any help or insights on the matter.
Forgot to update - I figured out a way around this particular problem:
Simply set the working directory of the program to the output directory (as shown by the command below) before calling the 'to_csv' function.
os.chdir('F:/Codelah/')
On a side note, this was an issue I primarily faced on Windows OS - Ubuntu worked like a charm and did not require this workaround!
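Putting it together, a minimal sketch of the workaround (assuming pandas and openpyxl are installed; the folder path is just an example and must already exist):
import os
import pandas as pd
frame3 = pd.DataFrame({'a': [1, 2, 3]})  # placeholder DataFrame
os.chdir('F:/Codelah/')  # switch to the (existing) output directory
frame3.to_csv('ABC_test.csv', index=False)  # relative paths now resolve inside it
with pd.ExcelWriter('ABCD.xlsx', engine='openpyxl') as writer:
    frame3.to_excel(writer, sheet_name='Sheet1')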
Hope this helps

ERROR: User does not have appropriate authorization level for file WORK.SASMACR.CATALOG

Anyone seen this error before?
"ERROR: User does not have appropriate authorization level for file WORK.SASMACR.CATALOG"
Occasionally, our batch jobs stop with this error. I don't understand how a file in WORK could fail to be writable?
Thanks,
Stig
After I disabled antivirus (Sophos) online scanning for the work folder and *.sas7bcat files, the error seems to have gone away.
According to this http://support.sas.com/documentation/cdl/en/mcrolref/61885/HTML/default/viewer.htm#a001328775.htm , macro catalogs are usually read-only, except while macros are being compiled or updated. Perhaps this answers the "why is this file in the work folder not writable" part of your question.

generate C/C++ command line argument parsing code from XML (or similar)

Is there a tool that generates C/C++ source code from XML (or something similar) to create command line argument parsing functionality?
Now a longer explanation of the question:
I have up till now used gengetopt for command line argument parsing. It is a nice tool that generates C source code from its own configuration format (a text file). For instance, the gengetopt configuration line
option "max-threads" m "max number of threads" int default="1" optional
among other things generates a variable
int max_threads_arg;
that I later can use.
But gengetopt doesn't provide me with this functionality:
A way to generate Unix man pages from the gengetopt configuration format
A way to generate DocBook or HTML documentation from the gengetopt configuration format
A way to reuse C/C++ source code and to reuse gengetopt configuration lines when I have multiple programs that share some common command line options
Of course gengetopt can provide me with a documentation text by running
command --help
but I am searching for marked-up documentation (e.g. HTML, DocBook, Unix man pages).
Do you know if there is any C/C++ command line argument tool/library with a liberal open source license that would suit my needs?
I guess that such a tool would use XML to specify the command line arguments. That would make it easy to generate documentation in different formats (e.g. man pages). The XML file should only be needed at build time to generate the C/C++ source code.
I know it is possible to use some other command line argument parsing library to read a configuration file in XML at runtime but I am looking for a tool that generate C/C++ source code from XML (or something similar) at build time.
Update 1
I would like to do as much of the computation as possible at compile time and as little as possible at run time. So I would like to avoid libraries that give you a map of the command line options, like for instance boost::program_options::variables_map ( tutorial ).
In other words, I prefer args_info.iterations_arg to vm["iterations"].as<int>()
User tsug303 suggested the TCLAP library. It looks quite nice. It would fit my need to divide the options into groups so that I could reuse code when multiple programs share some common options. Although it doesn't generate the source code from an XML configuration file, I almost marked that answer as the accepted answer.
But none of the suggested libraries fulfilled all of my requirements, so I started thinking about writing my own library. A sketch: a new tool that would take a custom XML format as input and generate both C++ code and an XML schema. Some other C++ code is generated from the XML schema with the tool CodeSynthesis XSD. The two chunks of C++ code are combined into a library. One extra benefit is that we get an XML schema for the command line options and a way to serialize all of them into a binary format (the CDR format generated by CodeSynthesis XSD). I will see if I get the time to write such a library. Better, of course, would be to find a library that has already been implemented.
Today I read about user Nore's suggested alternative. It looks promising and I will be eager to try it out when the planned C++ code generation has been implemented. The suggestion from Nore looks to be the closest thing to what I have been looking for.
Maybe this TCLAP library would fit your needs?
May I suggest you look at this project? It is something I am currently working on: an XSD schema to describe command line arguments in XML. I made XSLT transformations to create bash and Python code, a XUL frontend interface, and HTML documentation.
Unfortunately, I do not generate C/C++ code yet (it is planned).
Edit: a first working version of the C parser is now available. Hope it helps
I will add yet another project called protoargs. It generates C++ argument parser code from a protobuf .proto file, using cxxopts.
Unfortunately, it does not satisfy all of the author's needs: no documentation is generated and there is no compile-time computation. However, someone may find it useful.
UPD: As mentioned in the comments, I should specify that this is my own project.