What is the default charset for base64Decode() function in WSO2 ESB? - wso2

I am trying to decode a file using the base64Decode() function in XPath in WSO2 ESB, but I am failing to do that.
What is the default charset for base64Decode()?

The default charset (or encoding) depends on the default encoding of the JVM, which in turn depends on the OS.
Windows defaults to the ANSI encoding (cp1252 on Western-locale systems), while Linux usually uses UTF-8.
So you may set the charset with a JVM parameter yourself if you want to be sure.
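For example, a minimal sketch using the standard file.encoding system property (how you pass it depends on your setup; for WSO2 products, adding it to the wso2server.sh/.bat startup script or JAVA_OPTS is a common route, but that is an assumption about your installation):
-Dfile.encoding=UTF-8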

Related

Qt4 - Access Windows Machine Guid from registry

I want to access the Windows operating system's machine GUID stored in the Windows registry. I am using Qt 4.8 on Windows 8. I only get an empty string, even though I am following Qt's official QSettings documentation:
QSettings setting("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Cryptography", QSettings::NativeFormat);
QString mGuid = setting.value("MachineGuid").toString();
qDebug() << "Machine Guid is: " << mGuid;
qDebug() << setting.status(); // returns 0, i.e. no access error
Output:
Machine Guid is:
I can see MachineGuid under HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Cryptography in the Windows registry editor. How can I read the GUID from the Windows registry?
Your problem is that Windows has separate 64-bit and 32-bit registry views.
By default, you get the registry view that matches your application's target architecture. In this case, you're actually being redirected to the 32-bit view, so your code is checking the key located at HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Cryptography.
The MachineGuid value, along with many other Windows-created keys, only exists in the 64-bit registry on 64-bit systems. You can specify which registry view to access from the native API (see the sketch below), but I don't know of any way to do it through Qt.
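To illustrate the native-API route, here is a minimal Win32 sketch (not Qt) that opens the 64-bit registry view explicitly with KEY_WOW64_64KEY; error handling is kept minimal:
#include <windows.h>
#include <iostream>

int main()
{
    // KEY_WOW64_64KEY bypasses WOW64 redirection, so even a 32-bit
    // process reads the 64-bit registry view where MachineGuid lives.
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SOFTWARE\\Microsoft\\Cryptography",
                      0, KEY_READ | KEY_WOW64_64KEY, &hKey) == ERROR_SUCCESS)
    {
        WCHAR guid[64] = {};
        DWORD size = sizeof(guid);
        if (RegQueryValueExW(hKey, L"MachineGuid", nullptr, nullptr,
                             reinterpret_cast<LPBYTE>(guid), &size) == ERROR_SUCCESS)
            std::wcout << L"Machine Guid is: " << guid << L"\n";
        RegCloseKey(hKey);
    }
    return 0;
}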
As @Collin Dauphinee pointed out, you are probably accessing the 32-bit registry view on a 64-bit Windows.
If that's the case and you have access to Qt 5.7, instead of writing
QSettings setting("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Cryptography", QSettings::NativeFormat);
QString mGuid = setting.value("MachineGuid").toString();
You can write
QSettings setting("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Cryptography", QSettings::Registry64Format);
QString mGuid = setting.value("MachineGuid").toString();
That works perfectly for me.

Signing a structure in Windows and reading it in Linux

I have a structure such as follow:
struct mydata
{
    int a;
    int b;
};
I want to fill it in Windows and then send it to somebody who will read it on Linux. I am writing both applications.
The aim is that a user in the middle should not be able to change the data, but he can read it.
The user may have access to the source code of the Linux application, but not the Windows application.
My questions are:
1- How can I do this? My first idea is to create a hash of the structure, encrypt it with a private/public key pair, and send it to the user (on Windows). On Linux, decrypt it and check that the hash matches the data. Is this the best solution?
2- What type of library can I use? The library should be available on both Windows and Linux.
3- Is there any sample code that would give me a starting point?
Edit 1
The question is more about how to make sure that the data is not tampered with as it is transferred between the Windows system and the Linux one via file copy (a file on an SD card, or via email). So the question is more about how to make the data tamper-proof, not how to transfer it.
Edit 2
I need to send the data to the Linux system as a structure written into a file (a binary file that, when read by the application on Linux, is mapped into a structure and then used by the application). So effectively I have it as a structure on Windows; then I need to sign it, write it into a file, and send it to the Linux computer. On the Linux computer, the application needs to read it, check that it has not been tampered with, and then use the data.
My question is how to sign the data.
1.) Via network and sockets
2.) The POSIX socket API, which is similar on Windows and Linux; forget about encryption in the first place
3.) http://beej.us/guide/bgnet/
First, you need to find a way to encode/decode your structure content into a byte array in a platform-agnostic way (in terms of endianness, 32/64 bits, string encoding...). You should take a closer look at ASN.1. ASN.1 aims to provide an unambiguous, software- and hardware-independent data encoding. There are multiple libraries providing ASN.1 encoders and decoders, such as OpenSSL or Boost (even if they are not always well documented). A minimal hand-rolled alternative is sketched below.
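As an illustration of the encoding step, here is a sketch of a hand-rolled fixed-width little-endian layout (not full ASN.1, so treat it as a minimal stand-in rather than a real encoder):
#include <cstdint>
#include <vector>

struct mydata { int32_t a; int32_t b; };

// Serialize with fixed-width little-endian integers so both sides agree,
// regardless of host endianness or compiler struct padding.
std::vector<unsigned char> encode(const mydata& d)
{
    std::vector<unsigned char> buf;
    for (int32_t v : { d.a, d.b })
        for (int i = 0; i < 4; ++i)
            buf.push_back(static_cast<unsigned char>(
                (static_cast<uint32_t>(v) >> (8 * i)) & 0xFF));
    return buf;
}

mydata decode(const unsigned char* p)
{
    auto read32 = [p](int off) {
        uint32_t v = 0;
        for (int i = 0; i < 4; ++i)
            v |= static_cast<uint32_t>(p[off + i]) << (8 * i);
        return static_cast<int32_t>(v);
    };
    return { read32(0), read32(4) };
}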
For tamper resistance, there is a cryptographic standard for message exchange called CMS (Cryptographic Message Syntax, aka PKCS#7), specified in RFC 5652. This standard defines multiple message types providing tamper resistance: signed-data and authenticated-data. OpenSSL only supports signed-data messages, and signed-data requires public-key cryptography. Authenticated-data would be easier to use since it only requires sharing a common secret key, but I do not know of a library which supports authenticated-data.
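And a hedged sketch of the signed-data route using OpenSSL's CMS API (cert and pkey are assumed to be loaded elsewhere, e.g. with the PEM_read_bio_* functions; real code needs proper error handling):
#include <openssl/bio.h>
#include <openssl/cms.h>
#include <openssl/pem.h>

// Wrap an encoded byte buffer in a CMS signed-data message and write it
// out in S/MIME form. The Linux side reads it back and calls CMS_verify()
// against the signer's certificate.
bool sign_buffer(X509* cert, EVP_PKEY* pkey,
                 const unsigned char* data, int len, BIO* out)
{
    BIO* in = BIO_new_mem_buf(data, len);
    if (!in)
        return false;
    CMS_ContentInfo* cms = CMS_sign(cert, pkey, nullptr, in, CMS_BINARY);
    bool ok = cms && SMIME_write_CMS(out, cms, nullptr, CMS_BINARY) == 1;
    CMS_ContentInfo_free(cms);
    BIO_free(in);
    return ok;
}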

Difference and conversions between wchar_t for Linux and for Windows

I understand from other threads that on Windows, wchar_t is 16 bits, while on Linux, wchar_t is 32 bits.
I have a client-server architecture (using just pipes, not sockets) where my server is Windows-based and the client is Linux.
The server has an API to retrieve the hostname from the client. When the client is Windows-based, it can just call GetComputerNameW and return a wide string.
However, when the client is Linux-based, things get messy.
As a first naive approach, I used mbstowcs(), hoping to return a wchar_t* to the Windows server side.
However, this LPWSTR (I have typedef wchar_t* LPWSTR on my Linux client side) is not recognizable on Windows, since Windows expects its wchar_t to be 16-bit.
So is converting the output of gethostname() on Linux, which is a char*, to unsigned short (16-bit) my only option?
Thanks in Advance!
You will have to decide on the actual protocol for how to transport the data across the wire. There are several options here, although UTF-8 is usually the most sensible one; it also means that under Linux you can basically use the data as-is (no reason to use wchar_t to begin with, although you can obviously convert it into whatever you want).
Under Windows you will have to convert the UTF-8 into the UTF-16 (yes, not exactly, but oh well) that Windows wants, and if you want to send data you have to convert it back to UTF-8. Luckily, Windows provides MultiByteToWideChar() and WideCharToMultiByte() for exactly these purposes.
Obviously you can decide on any encoding you want, not necessarily UTF-8; the process is the same: when receiving data, convert it to the native format of the OS; when sending, convert it to your on-wire encoding. iconv works on Linux if you don't use UTF-8.
You are best off choosing a standard character encoding for the data you send over the pipe, and then require all machines to send their data using that encoding.
Windows uses UTF-16LE, so you could choose to use UTF-16LE over the pipe and then Windows machines can send their UTF-16LE encoded strings as-is, but Linux machines would have to convert to/from UTF-16LE as needed.
Or you could choose UTF-8 instead, which would reduce network bandwidth, but then both Windows and Linux machines would have to convert to/from UTF-8 as needed. For network communications, UTF-8 would be the better choice.
On Windows, you can use MultiByteToWideChar() and WideCharToMultiByte() with the CP_UTF8 codepage.
In Linux, use the iconv() API so you can specify the UTF-8 charset for encoding/decoding.
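For illustration, a minimal sketch of the Windows-side conversion (the reverse direction just swaps in WideCharToMultiByte(); on Linux, iconv() plays the equivalent role):
#include <windows.h>
#include <string>

// Convert UTF-8 bytes received over the pipe into the UTF-16 that
// Windows APIs expect. First call sizes the buffer, second converts.
std::wstring Utf8ToUtf16(const std::string& utf8)
{
    if (utf8.empty())
        return std::wstring();
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  static_cast<int>(utf8.size()), nullptr, 0);
    std::wstring utf16(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &utf16[0], len);
    return utf16;
}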

How to change the CP_ACP(0) of Windows ANSI APIs in an application?

I am trying to draw text using a DLL library which only exposes ANSI interfaces wrapping the Windows ANSI APIs, but I need to store my string data as UTF-8. I don't want to convert strings using the MultiByte/WideChar functions, so I want a way to change CP_ACP in my application, so that I can pass my string data to the ANSI APIs directly. Thanks.
PS: I don't want to change the system default codepage.
CP_ACP represents the system ANSI codepage. You cannot change that on a per-process or per-thread basis; it is a system-wide setting. If the DLL really is dependent on CP_ACP internally, then you have no choice but to convert your strings from/to UTF-8 whenever you interact with the DLL.
Starting with Windows 10 v1903, you can use the application manifest to set the active code page for a given process, which might be different from the system-wide code page:
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <assemblyIdentity type="win32" name="..." version="6.0.0.0"/>
  <application>
    <windowsSettings>
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
Obviously, if you need to support older versions of Windows, the application must still be aware that CP_ACP might not be CP_UTF8 and perform any necessary conversions itself.
More details can be found in Microsoft Docs.
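To check whether the manifest took effect, a quick sketch: GetACP() should report 65001 (CP_UTF8) on Windows 10 v1903+ when the activeCodePage entry is applied:
#include <windows.h>
#include <cstdio>

int main()
{
    // Reports the active ANSI code page for this process:
    // 65001 if the UTF-8 manifest entry is in effect, e.g. 1252 otherwise.
    printf("Active code page: %u\n", GetACP());
    return 0;
}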
UTF-8 is not a codepage, and as codepages only make sense to ANSI functions, you can't do what you're asking.
If you want to store strings as UTF-8, you WILL need to convert from the ANSI of your app to Unicode (wide char) using MultiByteToWideChar(), then use WideCharToMultiByte() to convert to UTF-8.
Alternatively, update your app to use Unicode/wide strings internally, and convert as needed.
"How to change the CP_ACP?" - "I don't (want) to change the system default codepage."
Well, you have to choose. CP_ACP is the system default codepage.

How to get OS language using C++ API?

I am in the process of developing an application which displays dialogs depending on the OS language. How can I get the OS language using C++ or the Windows APIs (Windows 2008/Vista/7)?
There are several functions to do this in Windows, depending on what format you want the information in. Prior to Windows Vista, the language information was encoded into an LCID (Locale ID), which includes the language as well as some information about sorting and formatting.
For Windows Vista and Windows 7, a more flexible system called Locale Names was devised; use GetSystemDefaultLocaleName there.
Use GetSystemDefaultLCID if you need to work on Win2k and WinXP.
The accepted answer to this question is wrong. You should not make user-interface decisions based on the default locale. Use GetUserDefaultUILanguage for this.
Have you resolved this problem?
If not: LPWSTR lpLocalName = NULL is wrong; WCHAR localName[LOCALE_NAME_MAX_LENGTH] is right,
because GetUserDefaultLocaleName() does no memory allocation; the caller must supply the buffer (see the sketch below).
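A minimal sketch of correct usage with a caller-supplied buffer, as noted above:
#include <windows.h>
#include <iostream>

int main()
{
    // GetUserDefaultLocaleName() fills a caller-supplied buffer;
    // it does not allocate one for you.
    WCHAR localeName[LOCALE_NAME_MAX_LENGTH];
    if (GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH) > 0)
        std::wcout << L"User default locale: " << localeName << L"\n";
    return 0;
}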