Extracting ASCII data from SAP to other systems - s4hana

We are exporting some data from SAP to Azure. This includes a field containing URL information. SAP stores this in ASCII format.
How can this data be converted to text in the target system? Are there standard libraries (e.g. Java, Python) available for this?
More information - when we view the data using SAP Logon, it shows as a string "FF060201....." which looks like ASCII hex. But when I try an online ASCII converter like http://www.unit-conversion.info/texttools/ascii/, it is unable to convert and show the URL (it displays junk characters). Is this because the SAP Logon screen displays the data in one way whereas the actual data is stored differently?
Thanks in advance for any pointers/help.
Regards,
S. Das
Edit:
This is how I am querying the table in SAP Logon and viewing the data (stored in the column named Data):
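If the value shown in SAP Logon really is a hex dump of the raw bytes, a first debugging step on the receiving side is to hex-decode it and try a few common code pages. Below is a minimal sketch in Python; the truncated sample value, the list of encodings, and the idea that the leading bytes (FF06...) are a header rather than text are all assumptions, not confirmed SAP behaviour.

# Hex-decode a value copied from SAP Logon and try a few likely encodings.
# Replace the placeholder with the full string from the Data column.
hex_string = "FF060201"  # placeholder - paste the complete hex value here

raw = bytes.fromhex(hex_string.replace(" ", ""))

for encoding in ("utf-8", "utf-16-le", "utf-16-be", "latin-1"):
    # errors="replace" keeps going even if the bytes are not valid text,
    # so you can eyeball which encoding (if any) reveals the URL.
    print(encoding, "->", raw.decode(encoding, errors="replace"))

If none of the encodings reveals readable text, the field is probably not plain character data at all (for example a compressed or structured binary value), and the conversion may have to happen on the SAP side before export.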

Related

Google BigQuery EXPORT DATA csv file on storage - issue: special characters written badly

I need to export data of a BigQuery table into csv on Google Cloud Storage.
I used the following:
EXPORT DATA
OPTIONS(
  uri = concat(path_file_output, '_*.csv'),
  format = 'CSV',
  overwrite = true,
  header = true,
  field_delimiter = ';'
)
AS
SELECT * FROM my_bigquery_table
In my_bigquery_table there are string columns containing the character '€' that get mangled during the export;
for example, a field with '1234.56 €' comes out as '1234.56 â'.
Is there a way to avoid this?
According to the Google documentation (https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements),
there aren't any other options for the export.
Microsoft will always be Microsoft... Reading the comments, the problem comes from Excel and its default encoding.
So, let me explain. Your system doesn't use UTF-8 encoding. In France, my system uses an ISO 8859 encoding, and when you open the file with Excel, it doesn't understand it. The same goes for comma-separated values (the meaning of CSV) imported into Excel: it doesn't work in France (we are used to semicolon-separated values).
Anyway, there is no straightforward way to simply open the file with Excel. But you can do it.
Open Excel, and open a blank workbook
Go to Data > Get Data > From Text
Select your file and click on "Get Data"
Then you can configure your import. Select UTF-8 as File Origin
And then continue with the other parameters. You can see a sample of your file and the result that you will get.
Note: I have nothing against Microsoft, but when it comes to development, Microsoft is a nest of traps...
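As a quick sanity check that the exported file itself is intact and that the garbling only appears when it is decoded with the wrong charset, here is a small sketch in Python; the shard file name is a placeholder for one of the exported CSV files.

# Reproducing the symptom: '€' is three UTF-8 bytes; decoding them as
# Latin-1 / ISO 8859 shows 'â' plus stray characters, as in the question.
sample = "1234.56 €"
print(sample.encode("utf-8").decode("latin-1"))  # prints '1234.56 â' plus stray bytes

# Reading the export with the correct encoding keeps the characters intact.
import csv

with open("my_export_000000000000.csv", encoding="utf-8", newline="") as f:  # placeholder name
    for row in csv.reader(f, delimiter=";"):
        print(row)
        break

If the characters print correctly here, the export is fine and only the Excel import needs the UTF-8 File Origin setting described above.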

What are the differences between object storage (for example S3) and a columnar-based technology?

I was thinking about the difference between those two approaches.
Imagine you must handle information about pattern calls, which later should be
displayed to the user. A pattern call is a tuple consisting of a unique integer
identifier ("id"), a user-defined name ("name"), a project-relative path to the
so-called pattern file ("patternFile") and a convenience flag, which states whether
the pattern should be called or not. The number of tuples is not known beforehand, and they won't be modified after initialization.
I thought that in this case a column-based approach, with BigQuery for example, would be better in terms of I/O and performance as well as the evolution of the schema. But actually I can't explain why. I would appreciate any help.
Amazon S3 is like a large key-value store. The Key is the filename (with full path) and the Value is the contents of the file. It's just a blob of data.
A columnar data store organizes data in such a way that specific data can be "jumped to", and only desired values need to be read from disk.
If you want to search the data, then some form of query logic is required. This could be done by storing the data in a database (typically a proprietary format) or by using a columnar storage format such as Parquet or ORC plus a query engine that understands this format (e.g. Amazon Athena).
The difference between S3 and columnar data stores is like the difference between a disk drive and an Oracle database.
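To make the columnar idea concrete, here is a minimal sketch in Python using pyarrow (one of several libraries that can write Parquet). The field names id/name/patternFile come from the question; the flag name "enabled", the sample values and the file name are made up for illustration.

import pyarrow as pa
import pyarrow.parquet as pq

# The pattern-call tuples from the question, stored column by column.
table = pa.table({
    "id": [1, 2, 3],
    "name": ["alpha", "beta", "gamma"],
    "patternFile": ["patterns/a.pat", "patterns/b.pat", "patterns/c.pat"],
    "enabled": [True, False, True],  # hypothetical name for the convenience flag
})

pq.write_table(table, "pattern_calls.parquet")

# A columnar reader can load just the columns it needs
# instead of scanning every whole record.
names_only = pq.read_table("pattern_calls.parquet", columns=["name"])
print(names_only.to_pydict())

On S3 alone the file would just be an opaque blob; the column-level access comes from the columnar file format and the query engine, not from the object store itself.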

Informatica - Number Datatype issue

I am loading data from a flat file into an Oracle table.
In the flat file I have a field holding values like "871685900000027865", and its datatype in the Source Qualifier is Decimal.
But in the Oracle target it is loaded as
"8.71685900000003E17"
While running the Debugger, I found out that the data is already changed to exponential form in the Source Qualifier itself.
Please suggest an easy approach to load the data as-is into the target.
Client screenshot attached for reference.
Use "Enable High Precision" session property.
I'd also add that in the Flat File it's a string. Flat files do not have any datatype definititions - these are just flat text files. So once you've specified Decimal in Source Qualifier, it tries to do the conversion for you. And with High Precision not enabled, it will use the exponential form. This is by design.
But again: what you get from DB strictly depends on the table definition and client tool that you're using. Can you share or check the table definition? If the column is decimal, it should not store data in this form.
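To see why the exponential form shows up once the string is treated as a non-high-precision number, here is a small illustration in Python of the underlying floating-point effect (this shows the general behaviour of double-precision floats, not Informatica's exact internals):

from decimal import Decimal

value = "871685900000027865"

# A double-precision float carries only ~15-16 significant digits,
# so the trailing digits are lost and the value prints in exponential form.
print(float(value))    # approximately 8.716859000000279e+17
print(Decimal(value))  # 871685900000027865, kept exactly

With High Precision enabled, the Decimal port keeps the digits exactly (up to its declared precision) instead of falling back to a double.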

Facebook Ads Insights API reportstats endpoint

I'm using reportstats edge to download some reports in CSV format. (It probably applies to XLS as well)
What I've noticed:
headers have different descriptions than the data_columns parameters - is there a resource describing the mapping? (e.g. adgroup_id -> 'Ad ID', adgroup_name -> 'Ad Name', unique_impressions -> 'Reach', ...)
will the order of CSV columns be as defined in the data_columns param?
some columns are not returned in CSV format - two I've identified so far are inline_actions and unique_social_clicks - the column is skipped in CSV format but available in JSON - is it a bug or is there a reason for that?
general question - does the CSV format require pagination, or will I always get all of the data?
value mapping - the constant values in CSV/XLS format have different labels, e.g. placement (desktop_feed -> 'News Feed on Desktop Computers'). Is there a resource describing all the possible values?
asynchronous report requests - it happens quite often that although I'm checking the report_run_id for async_percent_completion, the data is still not available when it should be. I get a text response "No data available." and need to retry; then it's usually available. Is this expected?
Thanks!
different names in API and XLS are intentional; API developers prefer naming consistent with the rest of Ads API, but people using XLS exports are often not developers and prefer human-friendly naming
you can use export_columns to define the order
inline_actions/unique_social_clicks - not sure, maybe these might be deprecated
it will give you all of the data
I don't think there's public resource for mapping between placement values :-(
you need to check the report_run_id for the job status (field "async_status"); that should work reliably. Once it's "Job Completed" you should be able to get the data
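A sketch of that polling loop in Python using plain HTTP against the Graph API is below. The API version, the exact endpoint layout and the /insights edge reflect the newer Insights API rather than the legacy reportstats edge, so treat them as assumptions to verify against the current Marketing API docs; only report_run_id, async_status and "Job Completed" are taken from the answer above.

import time
import requests

ACCESS_TOKEN = "..."           # placeholder
REPORT_RUN_ID = "1234567890"   # placeholder; returned when the async report is requested
GRAPH = "https://graph.facebook.com/v19.0"  # version is an assumption

# Poll the report run until the job reports completion.
while True:
    status = requests.get(
        f"{GRAPH}/{REPORT_RUN_ID}",
        params={"access_token": ACCESS_TOKEN},
    ).json()
    if status.get("async_status") == "Job Completed":
        break
    time.sleep(5)

# Only fetch the results once the status says the job is done.
results = requests.get(
    f"{GRAPH}/{REPORT_RUN_ID}/insights",
    params={"access_token": ACCESS_TOKEN},
).json()
print(results)

Waiting for async_status rather than only async_percent_completion avoids the "No data available" retries described in the question.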

C++ persistence in database

I would like to persist some of my objects in a database (this could be relational (PostgreSQL or MariaDB) or MongoDB). I have found a number of libraries that seem potentially useful, but I am missing the overall picture.
I have used boost::serialization to serialize C++ objects to XML/binary, but it is not clear to me how to get this into the database (do I use the binary or the XML format?).
How do I get this into my MongoDB or PostgreSQL?
You'd serialize to binary, as it is smaller and much faster. Also, the XML format isn't really pretty/easy to use outside of Boost Serialization anyway.
WARNING: Use Boost Portable Archive (EPA http://epa.codeplex.com/) if you need to use the format across different machines.
You'd usually store it in one of two kinds of column:
text or CLOB (character large object), by encoding in base64 and putting that in the database's native charset (base64 is safe even for ASCII)
BLOB (binary large object), which avoids the need to encode and could be more efficient storage-wise.
Note: if you need to index, store the index properties in normal database columns.
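As a language-neutral illustration of the two column options above (the question is about C++, but the storage pattern is the same), here is a small sketch in Python with SQLite; the table and column names are made up, and the byte string stands in for an already-serialized Boost archive.

import base64
import sqlite3

blob = b"\x00\x01\x02serialized-archive-bytes"  # placeholder for the binary archive

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, data_b64 TEXT, data_raw BLOB)")

# Option 1: base64 into a text/CLOB column (charset-safe, roughly 33% larger).
# Option 2: raw bytes into a BLOB column (no encoding step, more compact).
conn.execute(
    "INSERT INTO objects (data_b64, data_raw) VALUES (?, ?)",
    (base64.b64encode(blob).decode("ascii"), blob),
)

b64, raw = conn.execute("SELECT data_b64, data_raw FROM objects").fetchone()
assert base64.b64decode(b64) == raw == blob  # both options round-trip to the same bytes

Any searchable attributes (the index properties mentioned in the note above) would go into ordinary typed columns next to the blob rather than inside it.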
Finally, if you like, I have recently made a streambuffer that allows you to stream data directly into a Sqlite BLOB column. Perhaps you can glean some ideas from this you could use:
How to boost::serialize into a sqlite::blob?