Both field values are not captured from Oracle APEX dialog box - oracle-apex

I am trying to get 2 field values from 2 different modal dialog boxes.
PAGE - 1
+-------------+
| Button 1    |  --- Opens the 1st dialog box; the value entered there is returned
+-------------+      to Item Field1 when the dialog is closed.
+-------------+
| Item Field1 |
+-------------+
+-------------+
| Button 2    |  --- Opens the 2nd dialog box; the value entered there is returned
+-------------+      to Item Field2 when the dialog is closed.
+-------------+
| Item Field2 |
+-------------+
My issue is that both input values are not captured into their item fields: when I add the 2nd value from its dialog, the first item field becomes empty.
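For reference, the usual way to wire this up is a separate Dialog Closed dynamic action per button, each with its own Set Value action that targets only its own item. In JavaScript, a minimal sketch could look like the following; the item names P1_FIELD1 and P2_NEW_VALUE are assumptions, and P2_NEW_VALUE would have to be listed in the dialog's Close Dialog process "Items to Return":

// Hypothetical names: P1_FIELD1 on the parent page, P2_NEW_VALUE returned by the 1st dialog.
$(document).on("apexafterclosedialog", function (event, data) {
    // Only touch Field1 when the 1st dialog returned a value,
    // so closing the 2nd dialog does not clear it.
    if (data.P2_NEW_VALUE !== undefined) {
        apex.item("P1_FIELD1").setValue(data.P2_NEW_VALUE);
    }
});

The 2nd dialog would get an equivalent handler (or dynamic action) that writes only to the second item.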

Related

How do I change the styles of an already created list control?

I have created
pMyListControl = new CListCtrl;
pMyListControl->Create(WS_CHILD | WS_VISIBLE | WS_VSCROLL | LBS_NOTIFY | LVS_REPORT | LVS_SINGLESEL, rect, pTabControl, LIST_ID);
On some tabs the list will only allow single selection, like the above code, but on other tabs it will not have LVS_SINGLESEL, to allow for multiple item selections.
The list is created with a default tab that doesn't allow multiple selection.
Can I change the style without having to create a new list control depending on my tab selection? Is there a method for this?
You can use the CWnd::ModifyStyle() method, e.g.:
// turn on single selection
pMyListControl->ModifyStyle(0, LVS_SINGLESEL);
// turn off single selection
pMyListControl->ModifyStyle(LVS_SINGLESEL, 0);

DynamoDB one-to-many relation denormalization or adjacency?

I am designing a table for a data structure that represents a business operation that can be performed either ad hoc or as part of a batch. Operations that are performed together as a batch must be linked and queryable, and there is metadata on the batch that will be persisted.
The table must support 2 queries: retrieving the history of ad hoc instances and of batch instances.
Amazon suggests 2 approaches: adjacency lists and denormalization.
I am not sure which approach is best. Speed will be a priority, cost secondary.
This will be a multi-tenant database with multiple organizations and million+ operations. (Orgs will be part of the partition key to segregate these across nodes.)
Here are the ideas I've come up with:
Denormalized, non-adjacency - a single root wrapper object with 1 (ad hoc) or more (batch) operation data items.
Denormalized, adjacency - top-level keys consist of operation instances (ad hoc) as well as parent objects containing a collection of operation instances (batch) - see the sketch below.
Normalized, non-adjacency, duplicate data - top level consists of operation instances, with or without a batch key, and batch information duplicated among all members of a batch.
Is there a standard best practice? Any advice on setting up/generating keys?
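As a concrete illustration of the second ("denormalized, adjacency") idea, here is a minimal sketch in Python/boto3. The table name, key names, and attribute values are all hypothetical; the only grounded choices are that the org goes into the partition key and that batch parents and their operations share that partition, so a single Query returns the full history.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Operations")  # hypothetical table name

# Batch metadata item: the org is the partition key, the sort key
# prefix marks this as the parent (batch) record.
table.put_item(Item={
    "PK": "ORG#acme",
    "SK": "BATCH#2020-06-01T12:00:00Z#b-123",
    "type": "batch",
    "startedBy": "user-9",
})

# Operation item: same partition, so one Query on PK returns the whole
# history; the batch id is carried on the item (absent for ad hoc ops).
table.put_item(Item={
    "PK": "ORG#acme",
    "SK": "OP#2020-06-01T12:00:05Z#op-456",
    "type": "operation",
    "batchId": "b-123",
    "status": "done",
})

# Retrieve one org's history, newest first.
resp = table.query(
    KeyConditionExpression=Key("PK").eq("ORG#acme"),
    ScanIndexForward=False,
)
items = resp["Items"]

Prefixing the sort key with a timestamp makes the history query a straight range read within the partition, and the id suffix keeps the keys unique.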
Honestly, these terms confuse me when applied to NoSQL, specifically DynamoDB. For me it is hard to design a DynamoDB table piece by piece instead of from the whole business process. And to be precise, I worry more about data size than about the speed of a DynamoDB request, because we have a 1 MB limit per request. In other words, I should forget everything about relational DB concepts and see the data as JSON objects when working with DynamoDB.
But well, for a very simple one-to-many design (e.g. a person loves some fruits), my best scheme choice has a String PartitionKey. So my table will look like this:
|---------------------|---------------------------------------|
| PartitionKey        | Infos                                 |
|---------------------|---------------------------------------|
| PersonID            | {name:String, age:Number, loveTo:map} |
|---------------------|---------------------------------------|
| FruitID             | {name:String, otherProps}             |
|---------------------|---------------------------------------|
The sample data:
|---------------------|---------------------------------------|
| PartitionKey        | Infos                                 |
|---------------------|---------------------------------------|
| person123           | {                                     |
|                     |   name:"Andy",                        |
|                     |   age:24,                             |
|                     |   loveTo:["fruit123","fruit432"]      |
|                     | }                                     |
|---------------------|---------------------------------------|
| personABC           | {                                     |
|                     |   name:"Liza",                        |
|                     |   age:20,                             |
|                     |   loveTo:["fruit432"]                 |
|                     | }                                     |
|---------------------|---------------------------------------|
| fruit123            | {                                     |
|                     |   name:"Apple",                       |
|                     |   ...                                 |
|                     | }                                     |
|---------------------|---------------------------------------|
| fruit432            | {                                     |
|                     |   name:"Manggo",                      |
|                     |   ...                                 |
|                     | }                                     |
|---------------------|---------------------------------------|
But let's look at a more complex case, a sample chat app. Each channel allows many users, and each user can join any channel. Should it be one-to-many or many-to-many, and how do we make the relation? I would say I don't care about that. If we think like a relational DB, what a headache! In this case I would have a composite SortKey and even a Secondary Index to speed up the specific query.
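As a rough illustration of that composite sort key plus secondary index idea (all of the names below are made up):

# Membership item for the chat example (adjacency-list pattern).
membership = {
    "PK": "CHANNEL#general",   # partition key: the channel
    "SK": "USER#andy",         # composite sort key: the member
    "joinedAt": "2020-06-01",
}
# A Query on PK = "CHANNEL#general" lists all users in that channel.
# A GSI with the keys swapped (partition key = SK, sort key = PK),
# often called an inverted index, answers the reverse question:
# all channels a given user has joined.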
So, the answer to the question "what is the whole business process you work on?" will help us design the table, not piece by piece.

Django with Postgres: How To Allow Null as Default in Django Model

I am using Django with Postgres, and am trying to load data from a csv file into a table. However, since one field is a geometry field, I have to leave it blank when I load the table (otherwise \copy from will fail).
Here's my model:
name = models.CharField(max_length=200)
lat = models.DecimalField(max_digits=6, decimal_places=4, default=Decimal('0.0000'))
lon = models.DecimalField(max_digits=7, decimal_places=4, default=Decimal('0.0000'))
geom = models.PointField(srid=4326, blank=True, null=True, default=None)
and after migration, I ran psql like this:
mydb=>\copy target(name,lat,lon) from 'file.csv' DELIMITER ',' CSV HEADER ;
and I get error like this:
ERROR: null value in column "geom" violates not-null constraint
DETAIL: Failing row contains (name1, 30.4704, -97.8038, null).
CONTEXT: COPY target, line 2: "name1, 30.4704, -97.8038"
and here's the portion of csv file:
name,lat,lon
name1,30.4704,-97.8038
name2,30.3883,-97.7386
and here's the \d+ target:
mydb=> \d+ target
                              Table "public.target"
   Column   |          Type          | Modifiers | Storage  | Stats target | Description
------------+------------------------+-----------+----------+--------------+-------------
 name       | character varying(200) | not null  | extended |              |
 lat        | numeric(6,4)           | not null  | main     |              |
 lon        | numeric(7,4)           | not null  | main     |              |
 geom       | geometry(Point,4326)   |           | main     |              |
Indexes:
    "target_pkey" PRIMARY KEY, btree (name)
    "target_geom_id" gist (geom)
So I guess geom is set to null when loading the csv into the table? How can I fix it? I want the default of the geom field to be null so that I can update it later with another query.
Thanks very much!!
There are a couple of different options, described here.
You can either:
1) use a placeholder (0,0) which you can then overwrite, or
2) set spatial_index=False.
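A minimal sketch of the two options on the model field (GeoDjango; the model class name and the exact placeholder are assumptions):

from django.contrib.gis.db import models
from django.contrib.gis.geos import Point

class Target(models.Model):
    name = models.CharField(max_length=200)
    # Option 1: a placeholder point instead of NULL, to be overwritten later.
    geom = models.PointField(srid=4326, default=Point(0, 0))
    # Option 2 (instead of option 1): create the column without the spatial
    # index, as suggested above, and keep NULL as the default.
    # geom = models.PointField(srid=4326, blank=True, null=True, spatial_index=False)

Either way, re-run makemigrations/migrate so the column definition in Postgres matches the model before repeating the \copy.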

Visual C++: how to find the name of a column in MySQL

I am currently using the following code to fill a combo box with the column information inside a MySQL database:
private: void Fillcombo1(void) {
    String^ constring = L"datasource=localhost;port=3307;username=root;password=root";
    MySqlConnection^ conDataBase = gcnew MySqlConnection(constring);
    MySqlCommand^ cmdDataBase = gcnew MySqlCommand("select * from database.combinations ;", conDataBase);
    MySqlDataReader^ myReader;
    try {
        conDataBase->Open();
        myReader = cmdDataBase->ExecuteReader();
        while (myReader->Read()) {
            String^ vName;
            vName = myReader->GetString("OD");
            comboBox1->Items->Add(vName);
        }
    }
    catch (Exception^ ex) {
        MessageBox::Show(ex->Message);
    }
}
Is there any simple method for finding the name of the column and placing it within a combo box?
Also, I am adding small details to my app, such as a news feed which would need updating every so often. Will I have to dedicate a whole new database table to this single news feed text so that I can update it, or is there a simpler alternative?
Thanks.
An alternative is to use the DESCRIBE statement:
mysql> describe rcp_categories;
+---------------+------------------+------+-----+---------+----------------+
| Field         | Type             | Null | Key | Default | Extra          |
+---------------+------------------+------+-----+---------+----------------+
| ID_Category   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| Category_Text | varchar(32)      | NO   | UNI | NULL    |                |
+---------------+------------------+------+-----+---------+----------------+
2 rows in set (0.20 sec)
There may be an easier way without launching any other query, but you could also use the "SHOW COLUMNS" MySQL query:
SHOW COLUMNS FROM combinations FROM database
or
SHOW COLUMNS FROM database.combinations
Both will work.
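If all you need are the column names of the result set you already read, the data reader itself exposes them, so no extra statement is required. A rough sketch, reusing the names from the question:

// Fill comboBox1 with the column names of the current result set.
// FieldCount and GetName(i) come from the data reader (IDataRecord).
myReader = cmdDataBase->ExecuteReader();
for (int i = 0; i < myReader->FieldCount; i++) {
    comboBox1->Items->Add(myReader->GetName(i));
}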

Error: "Index '' does not exist on table" when trying to create entities in Doctrine 2.0 CLI

I have a MySQL database. I am trying to get Doctrine2 to create entities from the MySQL schema. I tried this with our production database and got the following error:
[Doctrine\DBAL\Schema\SchemaException] Index '' does not exist on table user
I then created a simple test database with only one table and only four fields: an auto-increment primary key field and three varchar fields. When attempting to have Doctrine create entities from this database, I got the same error.
Here is the table that I was trying to create an entity for. (Should have been simple.)
mysql> desc user;
+-----------+-------------+------+-----+---------+----------------+
| Field     | Type        | Null | Key | Default | Extra          |
+-----------+-------------+------+-----+---------+----------------+
| iduser    | int(11)     | NO   | PRI | NULL    | auto_increment |
| firstname | varchar(45) | YES  |     | NULL    |                |
| lastname  | varchar(45) | YES  |     | NULL    |                |
| username  | varchar(45) | YES  |     | NULL    |                |
+-----------+-------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)
Here is the command that I used in an attempt to get said entities created:
./doctrine orm:convert-mapping --from-database test ../models/test
I am running:
5.1.49-1ubuntu8.1 (Ubuntu)
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (i686) using readline 6.1
Doctrine 2.0.1
I was facing the same problem. I traced it back to the primary key not being identified / set correctly. The default value is boolean(false), which is cast to the string ''. Doctrine subsequently fails to locate an index for this attribute. ;-)
Solution: Define a PRIMARY KEY.
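For example (table and column names here are just placeholders), make sure every table you are converting has a primary key, then re-run the conversion:

ALTER TABLE some_table ADD PRIMARY KEY (id);

./doctrine orm:convert-mapping --from-database test ../models/test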