<Binary> in sql - c++

I want to select all the binary data from a column of a SQL database (SQL Server Enterprise) using a C++ query. I'm not sure what is in the binary data; all the column shows is <Binary>.
I tried the following code (it was passed on to me to study from), and I honestly don't fully understand parts of it, as noted in my comments:
SqlConnection^ cn = gcnew SqlConnection();
SqlCommand^ cmd;
SqlDataAdapter^ da;
DataTable^ dt;
cn->ConnectionString = "Server = localhost; Database=portable; User ID = glitch; Pwd = 1234";
cn->Open();
cmd=gcnew SqlCommand("SELECT BinaryColumn FROM RawData", cn);
da = gcnew SqlDataAdapter(cmd);
dt = gcnew DataTable("BinaryTemp"); //I'm confused about this piece of code, is it supposed to create a new table in the database or a temp one in the code?
da->Fill(dt);
for (int i = 0; i < dt->Rows->Count-1; i++)
{
    String^ value_string;
    value_string = dt->Rows[i]->ToString();
    Console::WriteLine(value_string);
}
cn->Close();
Console::ReadLine();
but it only prints "System.Data.DataRow" for every row.
Can someone help me?
(I need to put it into a matrix form after I extract the binary data, so if anyone could provide help for that part as well, it'd be highly appreciated!)

dt->Rows[i] is indeed a DataRow^. To extract a specific field from it, use its indexer (the cell of a binary column is a managed byte array, so it needs a cast):
array<Byte>^ blob = safe_cast<array<Byte>^>(dt->Rows[i][0]);
This extracts the first column (the only one in your query) and returns its value as a byte array.
To answer the question in your code, the way SqlDataAdapter works is like this:
1) You build a DataTable to hold the data to retrieve. You can fill in its columns, but you're not required to. Neither are you required to give it a name.
2) You build the adapter object, giving it a query and a connection object.
3) You call the Fill method on the adapter, giving it the previously created DataTable to fill with whatever your query returns.
4) And then you're done with the adapter. At this point you can dispose of it (for example inside a using statement if you're using C#).
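Putting that together with the indexer above, a minimal corrected version of your loop might look like this. It is an untested sketch that assumes BinaryColumn is a varbinary-type column and reuses the namespaces from your snippet; note that "BinaryTemp" is only an in-memory label, nothing is created in the database:
SqlConnection^ cn = gcnew SqlConnection("Server = localhost; Database=portable; User ID = glitch; Pwd = 1234");
cn->Open();
SqlCommand^ cmd = gcnew SqlCommand("SELECT BinaryColumn FROM RawData", cn);
SqlDataAdapter^ da = gcnew SqlDataAdapter(cmd);
DataTable^ dt = gcnew DataTable("BinaryTemp"); // in-memory table only, not a table in the database
da->Fill(dt);

for (int i = 0; i < dt->Rows->Count; i++) // Count, not Count-1, so the last row is included
{
    if (dt->Rows[i][0] == DBNull::Value)
        continue; // skip NULL cells

    // A binary column comes back as a managed byte array, not a string.
    array<Byte>^ blob = safe_cast<array<Byte>^>(dt->Rows[i][0]);
    Console::WriteLine("Row {0}: {1} bytes", i, blob->Length);
}

cn->Close();
From there each blob is just an array of bytes, so you can copy it into whatever matrix structure you need once you know how the data is laid out.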

Related

How to convert list into DataFrame in Python (Binance Futures API)

Using the Binance Futures API, I am trying to get my cryptocurrency positions into a usable form.
Using the code
from binance_f import RequestClient
request_client = RequestClient(api_key= my_key, secret_key=my_secet_key)
result = request_client.get_position()
I get the following result
[{"symbol":"BTCUSDT","positionAmt":"0.000","entryPrice":"0.00000","markPrice":"5455.13008723","unRealizedProfit":"0.00000000","liquidationPrice":"0","leverage":"20","maxNotionalValue":"5000000","marginType":"cross","isolatedMargin":"0.00000000","isAutoAddMargin":"false"}]
The type command indicates it is a list; however, adding print(result) at the end of the code yields:
[<binance_f.model.position.Position object at 0x1135cb670>]
This is baffling, because it does not look like the list above (in fact, debugging indicates objects of type Position). Using PrintMix.print_data(result) yields:
data number 0 :
entryPrice:0.0
isAutoAddMargin:True
isolatedMargin:0.0
json_parse:<function Position.json_parse at 0x1165af820>
leverage:20.0
liquidationPrice:0.0
marginType:cross
markPrice:5442.28502271
maxNotionalValue:5000000.0
positionAmt:0.0
symbol:BTCUSDT
unrealizedProfit:0.0
Now it looks like JSON... but it is a list. I am confused. Any ideas how I can convert result into a proper DataFrame, so that the columns are Symbol, PositionAmt, entryPrice, etc.?
Thanks!
Regarding your main question, as you wrote in the header: you should not be confused. In your case you have a list of Position objects; you can see the structure of Position in the GitHub repository of this library.
Anyway, to answer the question, please use the following:
import pandas as pd

df = pd.DataFrame([t.__dict__ for t in result])
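If you then want only the columns you mentioned, you can subset the resulting frame. The column names below are taken from the PrintMix output in the question, so double-check them against your actual Position objects:
# Keep only the columns of interest (names assumed from the PrintMix output).
df = df[["symbol", "positionAmt", "entryPrice", "markPrice", "unrealizedProfit"]]
print(df)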
For more options and information please read the great answers on this question
Good Luck!
You can also use this:
import pandas as pd

df = pd.DataFrame([t.__dict__ for t in result])
klines = df.values.tolist()
# Note: entry[1]..entry[4] pick columns by position, so they depend on the attribute order in Position.
open = [float(entry[1]) for entry in klines]
high = [float(entry[2]) for entry in klines]
low = [float(entry[3]) for entry in klines]
close = [float(entry[4]) for entry in klines]

How to normalize fields delimited by colons inside a single column in Informatica Cloud

I need help to normalize the field "DSC_HASH" inside a single column delimited by colons.
Input:
Output:
I achieved what I needed with a Java transformation:
1) In the Java transformation I created 4 output columns: COD1_out, COD2_out, COD3_out and DSC_HASH_out
2) Then I put the following code:
String[] column_split;
String column_delimiter = ";";
String[] column_data;
String data_delimiter = ":";

column_split = DSC_HASH.split(column_delimiter);
COD1_out = COD1;
COD2_out = COD2;
COD3_out = COD3;
for (int i = 0; i < column_split.length; i++)
{
    column_data = column_split[i].split(data_delimiter);
    DSC_HASH_out = column_data[0];
    generateRow();
}
There is no generic parser or loop construct in Informatica that can take one record and output an arbitrary number of records.
There are some ways you can bypass this limitation:
Using the Java Transformation, as you did, which is probably the easiest... if you know Java :) There may be limitations around performance or multi-threading.
Using a Router or a Normalizer with a fixed number of output records, high enough to cover all your cases, then filtering out empty records. The expressions to extract the fields are a bit complex to write (and maintain).
Using the XML Parser, but you have to convert your data to XML first and design an XML schema. For example, your first line would be changed into (split over multiple lines for readability):
<e><n>2320</n><h>-1950312402</h></e>
<e><n>410</n><h>103682488</h></e>
<e><n>4301</n><h>933882987</h></e>
<e><n>110</n><h>-2069728628</h></e>
Using a SQL Transformation or a Stored Procedure Transformation to call standard or custom database functions, but that would result in one SQL query per input row, which is bad performance-wise.
Using a Custom Transformation. Does anyone want to write C++ for that?
The Java Transformation is clearly a good solution for this situation.

Insert JSON format in Mysql query using C++

I am using JSON format to save data in my C++ program, and I want to send it to a MySQL database (the table tab has one column of type TEXT), but the query fails (I also tested VARCHAR and CHAR).
This is the relevant part of the code:
string json_example = "{\"array\":[\"item1\",\"item2\"], \"not an array\": \"asdf\"}";
mysql_init(&mysql); //initialize database connection
string player="INSERT INTO tab values (\"";
player+= json_example;
player += "\")";
connection = mysql_real_connect(&mysql,HOST,USER,PASSWD,DB,0,NULL,0);
// save data to database
query_state=mysql_query(connection, player.c_str()); // use player.c_str()
To show the final query that will be used, cout << player gives:
INSERT INTO tab values ("{"array":["item1","item2"], "not an
array": "asdf"}")
Using, for example, string json_example = "some text"; works,
but with the JSON string it does not work. Maybe the problem comes from the curly brackets {} or the double quotes "", but I haven't found a way to solve it.
I'm using:
mysql Ver 14.14 Distrib 5.5.44, for debian-linux-gnu (armv7l), on a Raspberry Pi 2
Any help will be appreciated, thanks.
Use a prepared statement. See the prepared statements documentation in the MySQL reference manual.
Prepared statements are more correct, safer, possibly faster, and keep your code cleaner. You get all those benefits and don't need to escape anything. There is hardly a reason not to use them.
Something like this might work. But take it with a grain of salt, because I have not tested or compiled it. It should just give you the general idea:
MYSQL_STMT* const statement = mysql_stmt_init(&mysql);
std::string const query = "INSERT INTO tab values (?)";
mysql_stmt_prepare(statement, query.c_str(), query.size()); // takes a C string, not a std::string

MYSQL_BIND bind[1] = {};
bind[0].buffer_type = MYSQL_TYPE_STRING;
bind[0].buffer = const_cast<char*>(json_example.c_str()); // buffer is a non-const void*, so cast away const
bind[0].buffer_length = json_example.size();

mysql_stmt_bind_param(statement, bind);
mysql_stmt_execute(statement);
mysql_stmt_close(statement);
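One thing worth adding to that sketch: each of those mysql_stmt_* calls returns a status code, so real code should check them and report failures. For example (again untested, and assuming <iostream> is included; the same check applies to mysql_stmt_prepare and mysql_stmt_bind_param):
if (mysql_stmt_execute(statement) != 0)
{
    // mysql_stmt_error() returns a human-readable description of the last statement error.
    std::cerr << "INSERT failed: " << mysql_stmt_error(statement) << std::endl;
}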

Can a protocol buffer be partially updated?

I am new to protobuf and here is my question: do protocol buffers support partial updates?
For example, I have such messages:
package model.test;

message Person {
    required int32 id = 1;
    required string name = 2;
    repeated PhoneNumber phone = 3;
}

enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
}

message PhoneNumber {
    required string number = 1;
    optional PhoneType type = 2 [default = HOME];
}
Now the data I have looks like this:
model::test::Person person;
person.set_id(1);
person.set_name("Jack");
model::test::PhoneNumber* _phone3 = person.add_phone();
_phone3->set_number("123567");
_phone3->set_type(model::test::MOBILE);
model::test::PhoneNumber* _phone4 = person.add_phone();
_phone4->set_number("347890");
_phone4->set_type(model::test::WORK);
The issue is that when only the work phone number changes, I have to rewrite the whole Person object with the following code:
fstream out("User.txt", ios::out | ios::binary | ios::trunc);
person.SerializePartialToOstream(&out);
But that is not efficient. I want to update only the PhoneNumber. Is there any partial update in protobuf, or something like that?
Protocol buffers are actually designed such that concatenation is the same as merge, and the last value wins when merging (except for repeated fields, which are appended). So in general you can serialize a blob containing just the field you changed, append that data, and it will override the earlier value. This only works at the level of the root object, though, and only for non-repeated fields; your phone numbers are a repeated field nested inside Person, so appending would add a new entry rather than replace the WORK number.
I don't think there is any support for what you want to do. If you think about it, it doesn't really make sense for a partial-update serialization to exist in the first place. For protobuf to manipulate an object serialized in a file on disk, it needs to read and deserialize the whole object so it knows which fields were previously populated. Then, when serializing and writing the updated object back to disk, you're going to have to rewrite the old file anyway (you can't splice extra bytes into the middle of a file without rewriting it).
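So in practice the usual pattern is read-modify-write: parse the whole Person back in, change just the field you care about, and serialize everything out again. A rough sketch of that, untested, reusing the generated model::test classes and the User.txt file from the question (the new number is just a placeholder):
#include <fstream>

model::test::Person person;

// Read the existing Person back from disk.
{
    std::fstream in("User.txt", std::ios::in | std::ios::binary);
    person.ParseFromIstream(&in);
}

// Change only the WORK number (the second repeated phone entry, index 1).
if (person.phone_size() > 1)
{
    person.mutable_phone(1)->set_number("999999"); // placeholder value
}

// Write the whole object back out, replacing the old file.
{
    std::fstream out("User.txt", std::ios::out | std::ios::binary | std::ios::trunc);
    person.SerializeToOstream(&out);
}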

IP Address storing, sorting, and filtering in MongoDB

I'm trying to figure out how to store IP addresses in a database, particularly MongoDB, so I can easily sort and filter on these addresses. I've looked over several questions:
Most efficient way to store IP Address in MySQL
save IP address in mongoDB
equivalent of INET_ATON() in mongodb
I've implemented an answer from the last question:
// ip example: 192.168.2.1
function inet_aton(ip) {
    // split into octets
    var a = ip.split('.');
    var buffer = new ArrayBuffer(4);
    var dv = new DataView(buffer);
    for (var i = 0; i < 4; i++) {
        dv.setUint8(i, a[i]);
    }
    return dv.getUint32(0);
}

// num example: 3232236033
function inet_ntoa(num) {
    var nbuffer = new ArrayBuffer(4);
    var ndv = new DataView(nbuffer);
    ndv.setUint32(0, num);
    var a = new Array();
    for (var i = 0; i < 4; i++) {
        a[i] = ndv.getUint8(i);
    }
    return a.join('.');
}
This actually works wonderfully. I store my IPs as ints, and I convert them back to IPs before I send them to my UI, and because I'm storing them as ints, sorting is a freebie.
The problem is filtering. If a user wants to look for an IP that starts with 102.1*, there doesn't seem to be a reasonable way to do this, especially if the user wants to use regexes. If they search for a full IP, that's no problem, but partial matching is a nuisance.
Does anyone have any insight into this issue? I'd love to hear any thoughts.
I guess you can opt for duplication and store two fields in the same document: one with the integer value of the IP address and the other with its string form. This is a one-time step at write time. If you have to sort, use the integer form; if you have to do any regex searches, use the string form. All it costs is the extra space.
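For example, with a hypothetical hosts collection and placeholder field names ip_str and ip_int, the write and the two kinds of reads would look roughly like this in the mongo shell (the integer comes from the inet_aton helper in the question, run wherever you do the write):
// Store both representations at write time.
db.hosts.insert({ ip_str: "192.168.2.1", ip_int: 3232236033 })

// Sorting uses the integer form.
db.hosts.find().sort({ ip_int: 1 })

// Partial matching uses the string form, e.g. everything starting with "102.1".
db.hosts.find({ ip_str: /^102\.1/ })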