I wonder which method is better for converting a const char* string to an NSString.
I found a few facts:
[NSString stringWithCString:length:] is more or less deprecated.
[NSString stringWithCString:encoding:] may work for my purpose.
But I want to convert with a length AND an encoding (so that non-ASCII characters come through correctly when I set the encoding to UTF-8). Any thoughts?
What I'm trying now is:
create another char buffer and copy len bytes into it with std::strncpy(ch1, ch2, len)
use [NSString stringWithCString:encoding:]
But it doesn't work well.
If your const char* variable is named foo (and it points to a null-terminated string), just say
[NSString stringWithUTF8String:foo]
Because UTF-8 is a superset of ASCII, this will work whether foo points to UTF-8 or ASCII. Then you can move up to full Unicode with no problems.
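For example (a minimal sketch; foo is assumed to point at a null-terminated string):
const char *foo = "caf\xC3\xA9";                    // the UTF-8 bytes for "café"
NSString *s = [NSString stringWithUTF8String:foo];  // plain ASCII input works identically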
Use:
[[NSString alloc] initWithBytes:length:encoding:]
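A minimal sketch of how that looks in practice (the names bytes and len are mine; the data is assumed to be UTF-8):
const char *bytes = "caf\xC3\xA9 latte";    // UTF-8 data, not necessarily null-terminated
NSUInteger len = 11;                        // number of bytes to consume
NSString *s = [[NSString alloc] initWithBytes:bytes
                                       length:len
                                     encoding:NSUTF8StringEncoding];
// s is nil if the bytes are not valid UTF-8 - check before using it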
I need to convert from CString to double in Embedded Visual C++, which supports only old-style C++. I am using the following code:
CString str = "4.5";
double var = atof( (const char*) (LPCTSTR) str );
and the result is var=4.0, so I am losing decimal digits.
I have made another test:
LPCTSTR str = "4.5";
const char* var = (const char*) str;
and the result is again 4.0.
Can anyone help me to get a correct result?
The issue here is that you are lying to the compiler, and the compiler trusts you. Since you are using Embedded Visual C++, I'm going to assume that you are targeting Windows CE. Windows CE exposes a Unicode-only API surface, so your project is very likely set to use Unicode (UTF-16 LE encoding).
In that case, CString expands to CStringW, which stores code units as wchar_t. When doing (const char*) (LPCTSTR) str you are then casting from a wchar_t const* to a char const*. Given the input, the first byte has the value 52 (the ASCII encoding for the character 4). The second byte has the value 0. That is interpreted as the terminator of the C-style string. In other words, you are passing the string "4" to your call to atof. Naturally, you'll get the value 4.0 as the result.
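To see what atof actually receives, look at the bytes of the wide literal (a sketch; wchar_t is UTF-16 LE here):
wchar_t const wide[] = L"4.5";
// bytes in memory: 34 00 2E 00 35 00 00 00
char const* narrow = reinterpret_cast<char const*>(wide);
// narrow reads as "4": the 00 byte after '4' terminates the C-style string, so atof(narrow) == 4.0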
To fix the code, use something like the following:
CStringW str = L"4.5";
double var = _wtof( str.GetString() );
_wtof is a Microsoft-specific extension to its CRT.
Note two things in particular:
The code uses a CString variant with an explicit character encoding (CStringW). Always be explicit about your string types. This helps readers follow your code and catches bugs before they happen (although all those C-style casts in the original code defeat that entirely).
The code calls the CString::GetString member to retrieve a pointer to the immutable buffer. This, too, makes the code easier to read, by not using what looks to be a C-style cast (but is an operator instead).
Also consider defining the _CSTRING_DISABLE_NARROW_WIDE_CONVERSION macro to prevent inadvertent character set conversions from happening (e.g. CString str = "4.5";). This, too, helps you catch bugs early (unless you defeat that with C-style casts as well).
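For instance, a minimal sketch (the macro must be defined before the ATL headers are included):
#define _CSTRING_DISABLE_NARROW_WIDE_CONVERSION
#include <atlstr.h>

CStringW str = "4.5";   // now a compile-time error instead of a silent conversion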
CString is not const char*. To convert a TCHAR CString to ASCII, use the CT2A macro; this will also allow you to convert the string to UTF-8 (or any other Windows code page):
// Convert using the local code page
CString str(_T("Hello, world!"));
CT2A ascii(str);
TRACE(_T("ASCII: %S\n"), ascii.m_psz);
// Convert to UTF8
CString str(_T("Some Unicode goodness"));
CT2A ascii(str, CP_UTF8);
TRACE(_T("UTF8: %S\n"), ascii.m_psz);
Found a solution using _stscanf:
CString str = "4.5";
double var=0.0;
_stscanf( str, _T("%lf"), &var );
This gives a correct result var=4.5
Thanks everyone for comments and help.
I want to convert a string or char* to _T but am not able to.
If I write
_tcscpy(cmdline, _T("hello world"));
it works perfectly, but if I write
char* msg = "hello world";
_tcscpy(cmdline, _T(msg));
it shows an error like: error C2065: 'Lmsg' : undeclared identifier
Please give me a solution.
Thanks in advance.
_T is a macro, defined as (if UNICODE is defined):
#define _T(a) L ## a
which can work only with string-literals. So when you write _T("hi") it becomes L"hi" which is valid, as expected. But when you write _T(msg) it becomes Lmsg which is an undefined identifier, and you didn't intend that.
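To see the expansion concretely (a sketch, assuming UNICODE is defined):
_tcscpy(cmdline, _T("hello world"));   // preprocesses to: _tcscpy(cmdline, L"hello world");
// _tcscpy(cmdline, _T(msg));          // would preprocess to: _tcscpy(cmdline, Lmsg); -> error C2065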
All you need is the mbstowcs function:
const char* msg="hello world"; //use const char*, instead of char*
wchar_t *wmsg = new wchar_t[strlen(msg)+1]; //memory allocation
mbstowcs(wmsg, msg, strlen(msg)+1);
//then use wmsg instead of msg
_tcscpy(cmdline, wmsg);
//memory deallocation - must do to avoid memory leak!
delete []wmsg;
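A variant that avoids the manual new/delete (a sketch using std::wstring; it assumes the source is in the current locale's narrow encoding):
#include <cstring>
#include <cstdlib>
#include <string>

const char* msg = "hello world";
std::wstring wmsg(std::strlen(msg) + 1, L'\0');         // worst case: one wide char per byte
size_t n = std::mbstowcs(&wmsg[0], msg, wmsg.size());
if (n != static_cast<size_t>(-1))
    wmsg.resize(n);                                     // trim to the converted length
_tcscpy(cmdline, wmsg.c_str());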
_T only works with string literals. All it does is turn the literal into an L"" string if the code's being compiled with Unicode support, or leave it alone otherwise.
Take a look at http://msdn.microsoft.com/en-us/library/dybsewaf(v=vs.80).aspx
You need to use the mbstowcs function.
You should also look at this article.
_T is a macro that makes string literals into wide-char string literals by prepending an L before the literal in UNICODE builds.
In other words, when you write _T("Hello") it is as if you had written "Hello" on an ANSI build or L"Hello" on a UNICODE build. The type of the resulting expression is char* or wchar_t* respectively.
_T cannot convert a string variable (std::string or char*) to a wchar_t* -- for this, you have to use a function like mbstowcs or MultiByteToWideChar.
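For the Windows route, a minimal sketch (the helper name widen is mine; the input is assumed to be in the current ANSI code page):
#include <windows.h>
#include <string>

std::wstring widen(const char* s)
{
    // First call asks for the required size in wchar_t units, terminator included.
    int len = MultiByteToWideChar(CP_ACP, 0, s, -1, NULL, 0);
    if (len == 0) return std::wstring();
    std::wstring w(len, L'\0');
    MultiByteToWideChar(CP_ACP, 0, s, -1, &w[0], len);
    w.resize(len - 1);   // drop the embedded terminator
    return w;
}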
Suggestion: It will be much easier for you (and in no way worse) to always make a UNICODE build and forget about _T, TCHAR and all other T-derivatives. Just use wide-character strings everywhere.
_T is not an actual type. It's a macro that prepends string literals with L so that they become wchar_t*s instead of char*s. If you need to convert a char* string to a wchar_t* one at runtime, you need mbstowcs, for example.
The _T macro just marks a string literal to be compiled as a wide (UTF-16 on Windows) literal. The reason it doesn't work on the variable is that a macro only rewrites source text; the contents of that variable are already narrow char data at runtime.
As already mentioned, the mbstowcs function is what you need to convert char data into wide-character (UTF-16) data.
Hi,
I use the following to convert a std::string to an NSString, but it returns (null) when I try to display the value of the NSString. I need it to return an empty string instead of (null) at conversion time.
StudyDate = [NSString stringWithCString:studyDate length:strlen(studyDate)];
Any suggestions to avoid null values?
Best regards
Edit: The syntax @"string" is used only for constructing NSString. With std::string you should use the standard "string" syntax.
NSString* aConstantNSString = @"foo";
const char* aConstantCString = "foo";
std::string aConstantStdString = "foo";
CFStringRef aConstantCFString = CFSTR("foo");
+stringWithCString:length: has been deprecated since the very beginning of the iPhone SDK. If the string is ASCII (or, more generally, valid UTF-8), you can often use +stringWithUTF8String: instead.
Your method works only when studyDate is a C string (i.e. const char*), but you said you have a std::string. There is no method to directly convert a std::string into an NSString. You must use .c_str() to create the C string first:
StudyDate = [NSString stringWithUTF8String:studyDate.c_str()];
(But the above shouldn't be the cause of your getting (null), because passing a std::string to +stringWithCString:length: or even strlen would give a compile-time error immediately:
error: cannot convert 'std::string' to 'const char*' in argument passing
So studyDate should already be a const char*. We need more context (code) to see what's going on.)
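A minimal sketch of the std::string route, with a check for invalid bytes (the fallback encoding here is only an example):
std::string studyDate = "20110617";
NSString *s = [NSString stringWithUTF8String:studyDate.c_str()];
if (s == nil) {
    // the bytes were not valid UTF-8; fall back to a single-byte encoding
    s = [NSString stringWithCString:studyDate.c_str()
                           encoding:NSISOLatin1StringEncoding];
}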
NSString *objCString = @"this is my objective c string";
std::string cppString; // this is your c++ string or however you declared it blah blah blah
cppString = [objCString UTF8String];
// this is the conversion of an NSString into a c++ string
// not sure if it will work for c strings but you can certainly try
That's all there is to it, I'm afraid. All you need is just that one line of code. This was done in iOS SDK 4.3, by the way, so I'm not sure if the code will change if you apply it elsewhere.
here's what I am trying to do:
typedef uint16_t uchar16_t;
uchar16_t buf[32];
// buf will contain timezone information like GMT-6, Eastern Daylight Time, etc
char * str = "Test";
for (int i = 0; i <= strlen(str); i++)
buf[i] = str[i];
I guess that's not correct, since uchar16_t holds 2 bytes per character while str holds 1 byte per character.
What am I supposed to do?
Strlen? buf[32]? Trying to destroy the universe?
You want to use a wstringstream.
std::wstringstream lols;
lols << "Test";
std::wstring cakes;
lols >> cakes;
Edit (re: the comment):
You shouldn't use strlen because any decent string system allows embedded zeros, and calling strlen in the loop condition makes the copy quadratic (it walks the whole string on every iteration). In addition, you didn't resize your buffer as needed, so a string longer than 31 characters would overflow it. And if you did size the buffer dynamically, you would have to free it manually afterwards. Both of these are serious failings of the C string system. My example code makes your standard library writer do all the work and avoids all these problems for you.
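One caveat on the example above (a sketch): operator>> stops at whitespace, so a zone name like "Eastern Daylight Time" would only extract "Eastern". Taking the whole buffer avoids that:
std::wstringstream lols;
lols << "Eastern Daylight Time";
std::wstring cakes = lols.str();   // the entire contents, whitespace included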
That's actually OK if your string will always be ASCII. To do it correctly, the portable function is mbstowcs, which assumes you're converting from the default locale; if you're on Windows, there are API functions that let you specify the source code page explicitly.
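A minimal sketch of that portable route (std::setlocale makes mbstowcs use the environment's encoding rather than the "C" locale):
#include <clocale>
#include <cstdlib>

std::setlocale(LC_CTYPE, "");
const char *str = "Test";
wchar_t buf[32];
std::mbstowcs(buf, str, sizeof buf / sizeof *buf);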
Your code will work, as long as str is ASCII; calling strlen() in the loop condition is probably a bad idea, though. It might be easier to just use swprintf() if it's available on your system (note that it wants a wchar_t buffer and a size in characters, not bytes):
wchar_t buf[32];                                      // swprintf takes wchar_t*, not uchar16_t*
const char *str = "Test";
swprintf(buf, sizeof buf / sizeof *buf, L"%s", str);  // C99: %s in a wide format means a narrow string; MSVC wants %hs
Have a look here.
Also, is there a good reason you are defining your own type?
If you have a (narrow) char string, you cannot convert it to a wchar_t string by setting your locale to "C" and then passing the string through mbstowcs(). That's because the "C" locale specifies a particular character encoding, and that encoding might not match the encoding of the execution character set, so mbstowcs() might map the characters to something unexpected, or could even fail (if the execution character set happened to use encodings that were incompatible with the encoding structure for the C locale character set).

Thus, in order to convert a char string into a wider string, you have to copy the chars one by one into an array of wchar_t. If you need to work with Unicode or UTF-16 or whatever after that, then wcstombs() is what you should look at.
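A sketch of the character-by-character copy that paragraph describes (only safe when the input is plain ASCII):
const char *str = "Test";
wchar_t wbuf[32];
size_t i = 0;
for (; str[i] != '\0' && i < 31; ++i)
    wbuf[i] = (wchar_t)(unsigned char)str[i];   // ASCII code points map 1:1
wbuf[i] = L'\0';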
I'm trying to make changes to some legacy code. I need to fill a char[] ext with a file extension gotten using filename.Right(3). Problem is that I don't know how to convert from a CStringT to a char[].
There has to be a really easy solution that I'm just not realizing...
TIA.
If you have access to ATL, which I imagine you do if you're using CString, then you can look into the ATL conversion classes like CT2CA.
CString fileExt = _T ("txt");
CT2CA fileExtA (fileExt);
If a conversion needs to be performed (as when compiling for Unicode), then CT2CA allocates some internal memory and performs the conversion, destroying the memory in its destructor. If compiling for ANSI, no conversion needs to be performed, so it just hangs on to a pointer to the original string. It also provides an implicit conversion to const char * so you can use it like any C-style string.
This makes conversions really easy, with the caveat that if you need to hang on to the string after the CT2CA goes out of scope, then you need to copy the string into a buffer under your control (not just store a pointer to it). Otherwise, the CT2CA cleans up the converted buffer and you have a dangling reference.
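A sketch of the safe pattern that caveat describes: copy the text out while the CT2CA temporary is still alive:
CString fileExt = _T("txt");
std::string ext;
{
    CT2CA converted(fileExt);
    ext = converted;        // copies the converted bytes into our own storage
}
// ext is still valid here; the CT2CA buffer is gone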
Well, you can always do this, even in Unicode:
char str[4];
strcpy( str, CStringA( cString.Right( 3 ) ).GetString() );
If you know you AREN'T using Unicode, then you could just do:
char str[4];
strcpy( str, cString.Right( 3 ).GetString() );
All the first code block does is transfer the last 3 characters into a narrow string (CStringA; CStringW is definitely Unicode, and CStringT depends on whether the UNICODE define is set) and then get them as a simple char string.
First use CStringA to make sure you're getting char and not wchar_t. Then just cast it to (const char *) to get a pointer to the string, and use strcpy or something similar to copy to your destination.
If you're completely sure that you'll always be copying 3 characters, you could just do it the simple way.
ext[0] = filename[filename.GetLength()-3];
ext[1] = filename[filename.GetLength()-2];
ext[2] = filename[filename.GetLength()-1];
ext[3] = 0;
I believe this is what you are looking for:
CString theString( "This is a test" );
char* mychar = new char[theString.GetLength()+1];
_tcscpy(mychar, theString);   // note: this only compiles in an ANSI build, where _tcscpy is strcpy
If I remember my old-school MS C++.
You do not specify where the CStringT type is from. It could be anything, including your own string-handling class. Assuming it is the CStringT from the MFC/ATL library available in Visual C++, you have a few options:
You haven't said whether you compile with or without Unicode, so I'll present it using TCHAR rather than char:
CStringT
<
TCHAR,
StrTraitMFC
<
TCHAR,
ChTraitsCRT<TCHAR>
>
> file(TEXT("test.txt"));
TCHAR* file1 = new TCHAR[file.GetLength() + 1];
_tcscpy(file1, file);
If you use CStringT specialised for ANSI strings, then:
std::string file1(CStringA(file).GetString());
char const* pfile = file1.c_str(); // to copy into a char[] buffer