How to check Oracle database In-Memory space? - in-memory-database

I wanted to check the allocated In-Memory size of my Oracle database. I tried the query below, but it does not work:
SELECT pool,alloc_bytes,used_bytes,populate_status FROM V$INMEMOTY_AREA;
Error:
ORA-00942: table or view does not exist

That's because you have a typo in the view name: instead of MEMORY you typed MEMOTY. The correct view is V$INMEMORY_AREA.
And here's the proof:
Your query
SQL>
SQL> SELECT pool,alloc_bytes,used_bytes,populate_status FROM V$INMEMOTY_AREA;
SELECT pool,alloc_bytes,used_bytes,populate_status FROM V$INMEMOTY_AREA
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL>
My query
SQL>
SQL> SELECT pool,alloc_bytes,used_bytes,populate_status FROM V$INMEMORY_AREA;
POOL ALLOC_BYTES USED_BYTES POPULATE_STATUS
------------------------------------------------------------------------------ ----------- ---------- ------------------------------------------------------------------------------
1MB POOL 0 0 OUT OF MEMORY
64KB POOL 0 0 OUT OF MEMORY
SQL>
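As a side note, both pools above show 0 bytes allocated with a POPULATE_STATUS of OUT OF MEMORY, which usually means the In-Memory column store has been given little or no memory. A quick check of the relevant parameter (standard Oracle views, nothing specific to this setup):
-- In SQL*Plus: show how much memory is assigned to the IM column store
SHOW PARAMETER inmemory_size
-- Or query V$PARAMETER directly; 0 means the IM column store is disabled
SELECT name, display_value FROM v$parameter WHERE name = 'inmemory_size';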
Enjoy!

Related

Not able to insert records from one table into another using PySpark SQL in an AWS Glue job

I am unable to insert records from one table into another table using PySpark SQL in an AWS Glue job. It does not raise any error either.
Table 1 name: details_info
spark.sql('select count(*) from details_info').show();
count: 50000
Table 2 name: goals_info
spark.sql('select count(*) from goals_info').show();
count: 0
I am trying to insert data from the "details_info" table into the "goals_info" table with the query below:
spark.sql("INSERT INTO goals_info SELECT * FROM details_info")
I am expecting the count of goals_info to be 50000, but it still shows 0 after executing the above SQL statement:
spark.sql('select count(*) from goals_info').show();
count: 0
The code block executes without throwing any error, but the data is not inserted, as the count still shows 0. Could anybody help me understand what the reason might be?
I even tried the PySpark write.insertInto() method, but the count still shows zero:
view_df = spark.table("details_info")
view_df.write.insertInto("goals_info")
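A couple of hedged diagnostics for this kind of silent no-op, assuming both tables are registered in the Glue/Hive catalog (run each statement through spark.sql(...)): refresh the target table's cached metadata before re-counting, and inspect the table definition to confirm that its provider, format, and location are what Spark actually wrote to.
-- clear any cached metadata for the target table before counting again
REFRESH TABLE goals_info;
-- check the Provider, InputFormat and Location of the target table
DESCRIBE FORMATTED goals_info;
-- re-run the count after the refresh
SELECT COUNT(*) FROM goals_info;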

Informatica Cloud Data Integration - find non-matching rows

I am working on Informatica Cloud Data Integration. I have two tables, Tab1 and Tab2, and the joining column is id. I want to find all records in Tab1 that do not exist in Tab2. What transformations can I use to achieve this?
Tab1
id name
1 n1
2 n2
3 n3
Tab2
id
1
5
6
I want to get the records with id 2 and 3 from Tab1, as they do not exist in Tab2.
You can use a SQL override in the database Source Qualifier:
Select * from table1 where id not in ( select id from table2)
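One general SQL caveat with that override (not specific to Informatica): NOT IN returns no rows at all if the subquery produces a NULL id, so a NOT EXISTS form is often the safer sketch:
Select * from table1 t1
where not exists (select 1 from table2 t2 where t2.id = t1.id)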
Alternatively, you can do it within Informatica as below.
Do a lookup on table2 with the join condition on id.
In an Expression transformation, create a flag:
out_flag = iif(isnull(:lkp(id)), 'pass', 'fail')
Add a Filter transformation next with the condition out_flag = 'pass'.
The whole mapping should look like this:
         Lkp
          |
Sq ----> Exp ----> Fil ----> Tgt

BigQuery - Join takes a few hours on a 3 TB table

I have 2 tables in BQ.
Table ID: blog
Table size: 3.07 TB
Table ID: llog
Table size: 259.82 GB
I'm running the query below and it took a few hours (it did not even finish; I killed it, so I was not able to capture the query plan).
select bl.row.iddi as bl_iddi, count(lg.row.id) as count_location_ping
from
`poc_dataset.blog` bl LEFT OUTER JOIN `poc_dataset.llog` lg
ON
bl.row.iddi = lg.row.iddi
where bl.row.iddi="124623832432"
group by bl.row.iddi
I'm not sure how to optimize this. The blog table has trillions of rows.
Unless some extra details are missing from your question, the query below should give you the expected result:
#standardSQL
SELECT row.iddi AS iddi, COUNT(row.id) AS count_location_ping
FROM `poc_dataset.llog`
WHERE row.iddi= '124623832432'
GROUP BY row.iddi
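The join is the expensive part: since you filter on a single iddi and only count rows from llog, the blog side contributes nothing the query needs, so it can be dropped entirely. If you really do need the blog side (for example, to return a zero count when the iddi has no matches in llog), a hedged sketch is to reduce each side before joining: deduplicate the key from the 3 TB table and pre-aggregate the counts from llog, so the join only sees a handful of rows.
#standardSQL
SELECT bl.iddi AS bl_iddi, IFNULL(lg.count_location_ping, 0) AS count_location_ping
FROM (
  SELECT DISTINCT row.iddi AS iddi
  FROM `poc_dataset.blog`
  WHERE row.iddi = '124623832432'
) bl
LEFT OUTER JOIN (
  SELECT row.iddi AS iddi, COUNT(row.id) AS count_location_ping
  FROM `poc_dataset.llog`
  WHERE row.iddi = '124623832432'
  GROUP BY iddi
) lg
ON bl.iddi = lg.iddi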

How to Abort or Exit from Redshift Query with a conditional expression?

I'm trying to abort/exit a query based on a conditional expression using a CASE statement:
If the table has 0 rows, the query should take the happy path.
If the table has more than 0 rows, the query should abort/exit.
drop table if exists #dups_tracker ;
create table #dups_tracker
(
column1 varchar(10)
);
insert into #dups_tracker values ('John'),('Smith'),('Jack') ;
with c1 as
(select
0 as denominator__v
,count(*) as dups_cnt__v
from #dups_tracker
)
select
case
when dups_cnt__v > 0 THEN 1/denominator__v
else
1
end Ind__v
from c1
;
Here is the error message:
Amazon Invalid operation: division by zero; 1 statement failed.
There is no concept of aborting an SQL query. It either compiles into a query or it doesn't. If it does compile, the query runs.
The closest option would be to write a Stored Procedure, which can include IF logic. So, it could first query the contents of a table and, based on the result, decide whether it will perform another query.
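For illustration, a minimal sketch of such a stored procedure, assuming a permanent dups_tracker table (adapt the name to your own table) and using RAISE EXCEPTION to abort when rows are found:
CREATE OR REPLACE PROCEDURE abort_if_dups()
AS $$
DECLARE
    dups_cnt INT;
BEGIN
    SELECT INTO dups_cnt COUNT(*) FROM dups_tracker;
    IF dups_cnt > 0 THEN
        -- abort the procedure when duplicates exist
        RAISE EXCEPTION 'Aborting: % duplicate row(s) found in dups_tracker', dups_cnt;
    END IF;
    -- happy path: put the rest of the processing here
END;
$$ LANGUAGE plpgsql;
CALL abort_if_dups();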
Here is the logic I was able to write to abort a SQL statement in the positive use case:
/* Dummy Table to Abort Dups Check process if Positive */
--Dups Table
drop table if exists #dups;
create table #dups
(
dups_col varchar(1)
);
insert into #dups values('A');
--Dummy Table
drop table if exists #dummy ;
create table #dummy
(
dups_check decimal(1,0)
)
;
--When Table is not empty and has Dups
insert into #dummy
select
count(*) * 10
from #dups
;
/*
[Amazon](500310) Invalid operation: Numeric data overflow (result precision)
Details:
-----------------------------------------------
error: Numeric data overflow (result precision)
code: 1058
context: 64 bit overflow
query: 3246717
location: numeric.hpp:158
process: padbmaster [pid=6716]
-----------------------------------------------;
1 statement failed.
*/
--When Table is empty and doesn't have dups
truncate #dups ;
insert into #dummy
select
count(*) * 10
from #dups
;
drop table if exists temp_table;
create temp table temp_table (field_1 bool);
insert into temp_table
select case
when false -- or true
then 1
else 1 / 0
end as field_1;
This should compile, and fail when the condition isn't met.
Not sure why it's different from your example, though...
Edit: the above doesn't work querying against a table. Leaving it here for posterity.

SQLite C++ Compare two tables within the same database for matching records

I want to be able to compare two tables within the same SQLite database, using a C++ interface, for matching records. Here are my two tables.
Table name : temptrigrams
ID TEMPTRIGRAM
---------- ----------
1 The cat ran
2 Compare two tables
3 Alex went home
4 Mark sat down
5 this database blows
6 data with a
7 table disco ninja
(+78 more rows)
Table Name: spamtrigrams
ID TRIGRAM
---------- ----------
1 Sam's nice ham
2 Tuesday was cold
3 Alex stood up
4 Mark passed out
5 this database is
6 date with a
7 disco stew pot
(+10000 more rows)
The first table has two columns and 85 records and the second table has two columns with 10007 records.
I would like to take the TEMPTRIGRAM column from the first table, compare it against the TRIGRAM column in the second table, and return the number of matches across the tables. So if ID 1, 'The cat ran', appears in spamtrigrams, I would like that counted and returned, with the total at the end as an integer.
Could somebody please explain the syntax for the query to perform this action?
Thank you.
This is a join query with an aggregation. My guess is that you want the number of matches per trigram:
select t1.temptrigram, count(t2.trigram)
from table1 t1 left outer join
table2 t2
on t1.temptrigram = t2.trigram
group by t1.temptrigram;
If you just want the number of matches:
select count(t2.trigram)
from table1 t1 join
table2 t2
on t1.temptrigram = t2.trigram;
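Plugging the actual table and column names from the question into the second query gives the sketch below. Note that = is case-sensitive in SQLite by default, so 'The cat ran' and 'The Cat Ran' would only match if you add COLLATE NOCASE.
SELECT COUNT(*)
FROM temptrigrams t1
JOIN spamtrigrams t2
  ON t1.temptrigram = t2.trigram;                      -- exact, case-sensitive match
--ON t1.temptrigram = t2.trigram COLLATE NOCASE;       -- case-insensitive variant
From C++ you would prepare this statement with sqlite3_prepare_v2(), call sqlite3_step() once, and read the total with sqlite3_column_int(stmt, 0).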