Read data from QlikView into SAS

I have a QlikView data file.
The file includes:
An XML-formatted table header. After loading the XML header into SAS, I obtained several SAS datasets that describe the structure of the QlikView data file, and I created an empty SAS dataset matching that structure.
The actual data looks like this: " à# à# à# # ÁZ¦EÀ ÁZ¦EÀÁZ¦EÀ ", but I don't know what format the data is stored in.
Now I need to load the actual data into this empty SAS dataset, but I don't know how. My leader suggested "reading binary data". I tried using the INFILE and INPUT statements with ENCODING, but my attempts weren't successful because it is very hard to determine the binary informat.
Can somebody give me a suggestion? Thanks so much!
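Before committing to a particular SAS informat, it can help to dump a slice of the file and try decoding it as common binary numeric layouts to see which one yields plausible values. A minimal Python sketch of that probing step (the sample bytes below are fabricated stand-ins, not your actual QlikView file):

```python
import struct

# Hypothetical raw bytes standing in for the QlikView data section;
# in practice you would read them with open(path, "rb").read().
raw = struct.pack("<4d", 1.0, 2.5, -3.0, 42.0)

# Try interpreting consecutive 8-byte chunks as little- and big-endian
# doubles to see which interpretation produces plausible values.
for fmt, label in (("<d", "little-endian double"), (">d", "big-endian double")):
    values = [struct.unpack_from(fmt, raw, off)[0] for off in range(0, len(raw), 8)]
    print(label, values)
```

Whichever layout looks sensible then suggests the informat to try in SAS (e.g. RB8. for native 8-byte floating point).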

Related

How can I make Power Query read ".dss" files?

I'm trying to build a dashboard in Power BI with the .dss files produced by HEC-HMS simulations to show time-series results, but the data is inside a ".dss" file and Power Query says: "we don't recognize the format of the first file".
How can I open those ".dss" files inside Power Query?
Thanks! Awaiting help.
This looks like what you might be looking for:
HEC-DSS File and HEC-DSSVue – Gridded Data:
Quote:
HEC-DSS, USACE Hydrologic Engineering Center Data Storage System, is a type of database system to store data primarily for hydrologic and hydraulic modeling (*.dss file). HEC-DSSVue is a tool to view, edit, and visualize a HEC-DSS file. Unlike other commercial or open source databases, HEC-DSS is not a relational database: HEC-DSS uses blocks (records) to store data within a HEC-DSS file, and each HEC-DSS file can have numerous blocks (records). In addition to time series data and paired data in HEC-DSS, gridded data can also be stored in a HEC-DSS file.
HEC-DSSVue can be downloaded from here:
https://www.hec.usace.army.mil/software/hec-dssvue/

The source file structure will change on a daily basis in Informatica Cloud

The requirement: the source file structure will change daily / dynamically. How can we achieve this in Informatica Cloud?
For example:
Consider a flat file source arriving in different formats: with a header, without a header, or with different metadata (today a file with 4 columns, tomorrow 7 different columns, the day after tomorrow without a header, another day a file with a record count in it).
I need to consume all these dynamically changing files in one Informatica Cloud mapping. Could you please help me with this?
This is a tricky situation. I know it's not a perfect solution, but here is my idea:
Create a source file structure with the maximum number of columns, all of type text, say 50. Read the file and apply a filter to clean up header data etc. Then use a router to treat files according to their structure; the filename may give you a hint about what it contains. Once you identify the type of file, convert the columns according to their data types and load them into the correct target.
Mapping would look like Source -> SQ -> EXP -> FIL -> RTR -> TGT1, TGT2
There has to be a pattern to identify the dynamic file structure.
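The "read wide, then route" idea can be sketched outside Informatica too; here is a rough Python illustration of the dispatch logic (the column names and routing rules are made up for the example):

```python
import csv, io

# Routing rules are illustrative assumptions: real headers would come
# from your own file specifications.
KNOWN_HEADERS = {("id", "name", "amount"), ("id", "name", "amount", "date")}

def route(text):
    """Parse a file as plain text rows, then decide how to treat it."""
    rows = list(csv.reader(io.StringIO(text)))
    first = tuple(cell.strip().lower() for cell in rows[0])
    if first in KNOWN_HEADERS:
        return "target_with_header", rows[1:]        # strip the header row
    if len(rows[0]) == 1 and rows[0][0].isdigit():
        return "record_count_file", rows[1:]          # first line is a count
    return "target_headerless", rows                  # treat all rows as data

target, data = route("id,name,amount\n1,Ann,10\n2,Bob,20\n")
print(target, len(data))
```

As the answer says, there has to be *some* detectable pattern (header contents, column count, filename) for any such router to work.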
HTH...
To summarise my understanding of the problem:
You have a random number of file formats
You don't know the file formats in advance
The files don't contain the necessary information to determine their format.
If this is correct then I don't believe this is a solvable problem in Informatica or in any other tool, coding language, etc. You don't have enough information available to enable you to define the solution.
The only solution is to change your source files. Possibilities include:
a standard format (or one of a small number of standard formats with information in the file that allows you to programmatically determine the format being used)
a self-documenting file type such as JSON
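To make the second option concrete, a self-documenting file carries its own schema, so the consumer never has to guess the layout. A small Python sketch using JSON (the field names are illustrative, not a real spec):

```python
import json

# The file describes its own structure: column names, header flag, rows.
payload = {
    "columns": ["id", "name", "amount"],
    "has_header": True,
    "rows": [[1, "Ann", 10.0], [2, "Bob", 20.0]],
}
text = json.dumps(payload)

# The consumer reads the metadata first, then processes the rows
# accordingly, however many columns arrive that day.
loaded = json.loads(text)
print(loaded["columns"], len(loaded["rows"]))
```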

How can I manipulate CSVs from within C++

I am trying to create a program that writes out to a CSV (comma-separated) file. Is there a way to control, say, the column width, or whether a cell is left- or right-justified, from within my code, so that when I open the file in Excel it looks better than a bunch of strings crammed into tiny cells? My goal is for the user to do as little thinking as possible. If they open the file and have to resize everything just to see it, that seems a little crummy.
CSV is a plain text file format. It doesn't support any visual formatting. For that, you need to write the data to another file format such as .xlsx or .ods.
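To illustrate the point: a CSV round-trips only the cell text, so layout hints such as column widths have to be computed separately and applied when writing a richer format (e.g. .xlsx via a spreadsheet library). A Python sketch with made-up data:

```python
import csv, io

rows = [["name", "department", "salary"],
        ["Alice", "Engineering", "98000"],
        ["Bob", "HR", "61000"]]

# Writing and re-reading a CSV preserves the cell text and nothing else:
# no widths, no justification, no styling survive.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
assert list(csv.reader(io.StringIO(buf.getvalue()))) == rows

# Any presentation data, like per-column widths, must live outside the
# CSV and be applied when producing a spreadsheet file instead.
widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
print(widths)
```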

How can I store the content of a *.css file (text file) with additional information in a new file?

I have a text file (*.css, Cascading Style Sheets), which is plain text.
Then I have additional program information, just some double and int values, which have nothing to do with the text file directly.
I would like to store that state in a file, so that when I open the file I have access to the content of the *.css and to the double and int values.
That way I would be able to restore the application's last state with the text file content and those double and int values.
What would be the most effective way?
I guess you'll want the result to still be usable as a CSS file. In that case, add a comment block with a marker at the beginning, where you can store your data in some ASCII format, e.g. JSON. For example:
/* --#--
{"x": 42, "y": 47.11, "whatever": "blabla"}
*/
/* here comes the original css */
Then you can easily find out whether the data is already there by looking for /* --#--. You can use an existing JSON parser to retrieve your data and an existing JSON writer to generate the file. You don't have to parse the whole CSS, only the comment with the marker.
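The marker approach can be sketched in a few lines of Python (the state values are the ones from the example above; the marker is the suggested `--#--`):

```python
import json, re

css = "body { color: red; }"
state = {"x": 42, "y": 47.11, "whatever": "blabla"}

# Prepend a marked comment block holding the state as JSON.
# The result is still valid CSS, since /* ... */ is just a comment.
combined = "/* --#--\n" + json.dumps(state) + "\n*/\n" + css

# Later: find the marked comment and parse only its JSON body,
# without parsing the CSS itself.
match = re.search(r"/\* --#--\n(.*?)\n\*/", combined, re.S)
restored = json.loads(match.group(1))
print(restored, combined.endswith(css))
```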

Generate dictionary file from Stata data

I know that I can create a .dta file if I have a .dat file and a dictionary (.dct) file. However, I want to know whether the reverse is also possible. In particular, if I have a .dta file, is it possible to generate a .dct file along with a .dat file? (Stata's export command allows export as an ASCII file, but I haven't found a way to generate a .dct file.) StatTransfer does generate .dct and .dat files, but I was wondering whether it is possible without using StatTransfer.
Yes. outfile will create dictionaries as well as export data in ASCII (text) form.
If you want dictionaries and dictionaries alone, you would need to delete the data part.
If you really want two separate files, you would need to split each file produced by outfile.
Either is programmable in Stata, or you could just use your favourite text editor or scripting language.
Dictionaries are in some ways a very good idea, but they are not as important to Stata as they were in early versions.
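The splitting step mentioned above is easy to script. A Python sketch, assuming (as is conventional for output from Stata's outfile with the dictionary option) that the file opens with a `dictionary {` block closed by a `}` line, followed by the data; the sample content here is made up:

```python
# Split a combined dictionary-plus-data file into .dct and .dat parts.
# Assumption: the dictionary block ends at the first line that is just "}".
sample = """dictionary {
    float price
    str18 make
}
4099 "AMC Concord"
4749 "AMC Pacer"
"""

lines = sample.splitlines(keepends=True)
close = next(i for i, ln in enumerate(lines) if ln.strip() == "}")
dct = "".join(lines[: close + 1])   # would be written to the .dct file
dat = "".join(lines[close + 1 :])   # would be written to the .dat file
print(dct.startswith("dictionary"), len(dat.splitlines()))
```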