How to set table compression mode in Bigtable using HBase shell?

I'm creating tables in Bigtable using the HBase shell, and the usual create table command, where you can specify compression, apparently ignores the compression attribute.
Example:
hbase(main):003:0> create 'table_snappy', {NAME => 'event', VERSIONS => 1, COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW'}
hbase(main):004:0> describe 'table_snappy'
Table table_snappy is ENABLED
table_snappy
COLUMN FAMILIES DESCRIPTION
{NAME => 'event', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.0870 seconds
hbase(main):003:0> create 'table_lzo', {NAME => 'event', VERSIONS => 1, COMPRESSION => 'LZO', BLOOMFILTER => 'ROW'}
hbase(main):004:0> describe 'table_lzo'
Table table_lzo is ENABLED
table_lzo
COLUMN FAMILIES DESCRIPTION
{NAME => 'event', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.0870 seconds

Bigtable uses proprietary compression algorithms and does not expose compression methods or configuration. So while your COMPRESSION setting is ignored, compression is still happening; it is managed for you automatically.
This is documented in the Bigtable differences from HBase:
Column families
When you create a column family, you cannot configure the block size or compression method, either with the HBase shell or through the HBase API. Cloud Bigtable manages the block size and compression for you.
In addition, if you use the HBase shell to get information about a table, the HBase shell will always report that each column family does not use compression. In reality, Cloud Bigtable uses proprietary compression methods for all of your data.

Related

Save AWS Polly mp3 file to S3

I am trying to send some text to AWS Polly to convert to speech and then save that mp3 file to S3. That part seems to work now.
// Send text to AWS Polly
$client_polly = new Aws\Polly\PollyClient([
    'region'      => 'us-west-2',
    'version'     => 'latest',
    'credentials' => [
        'key'    => $aws_useKey,
        'secret' => $aws_secret,
    ]
]);

$text  = 'Test. Test. This is a sample text to be synthesized.';
$voice = 'Matthew';

$result_polly = $client_polly->startSpeechSynthesisTask([
    'Text'               => $text,
    'TextType'           => 'text',
    'OutputFormat'       => 'mp3',
    'OutputS3BucketName' => $aws_bucket,
    'OutputS3KeyPrefix'  => 'files/audio/',
    'VoiceId'            => $voice,
    'ACL'                => 'public-read'
]);

echo $result_polly['ObjectURL'];
I'm also trying to accomplish a couple of other things:
1. Make the mp3 file publicly accessible. Currently I have to go to the AWS console and click the "Make Public" button; 'ACL' => 'public-read' doesn't seem to work for me.
2. Return the full URL of the mp3 file. For some reason $result_polly['ObjectURL'] doesn't get any value.
What am I missing?
There is no ACL field in the StartSpeechSynthesisTask call:
$result = $client->startSpeechSynthesisTask([
'LanguageCode' => 'arb|cmn-CN|cy-GB|da-DK|de-DE|en-AU|en-GB|en-GB-WLS|en-IN|en-US|es-ES|es-MX|es-US|fr-CA|fr-FR|is-IS|it-IT|ja-JP|hi-IN|ko-KR|nb-NO|nl-NL|pl-PL|pt-BR|pt-PT|ro-RO|ru-RU|sv-SE|tr-TR',
'LexiconNames' => ['<string>', ...],
'OutputFormat' => 'json|mp3|ogg_vorbis|pcm', // REQUIRED
'OutputS3BucketName' => '<string>', // REQUIRED
'OutputS3KeyPrefix' => '<string>',
'SampleRate' => '<string>',
'SnsTopicArn' => '<string>',
'SpeechMarkTypes' => ['<string>', ...],
'Text' => '<string>', // REQUIRED
'TextType' => 'ssml|text',
'VoiceId' => 'Aditi|Amy|Astrid|Bianca|Brian|Carla|Carmen|Celine|Chantal|Conchita|Cristiano|Dora|Emma|Enrique|Ewa|Filiz|Geraint|Giorgio|Gwyneth|Hans|Ines|Ivy|Jacek|Jan|Joanna|Joey|Justin|Karl|Kendra|Kimberly|Lea|Liv|Lotte|Lucia|Mads|Maja|Marlene|Mathieu|Matthew|Maxim|Mia|Miguel|Mizuki|Naja|Nicole|Penelope|Raveena|Ricardo|Ruben|Russell|Salli|Seoyeon|Takumi|Tatyana|Vicki|Vitoria|Zeina|Zhiyu', // REQUIRED
]);
Therefore, you will either need to make another call to Amazon S3 to change the ACL of the object, or use an Amazon S3 Bucket Policy to make the bucket (or a path within the bucket) public.
The output location is given in the OutputUri field (NOT OutputUrl -- URI vs URL).
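For example, here is a minimal sketch of that follow-up, reusing the $client_polly, $result_polly, and $aws_bucket from the question plus a separate Aws\S3\S3Client (the client name, polling interval, and the URI parsing are illustrative assumptions). The synthesis task runs asynchronously, so you poll getSpeechSynthesisTask until it completes, read the OutputUri, then set the object's ACL with a separate putObjectAcl call:
// Assumes $client_polly, $result_polly and $aws_bucket from the question.
$client_s3 = new Aws\S3\S3Client([
    'region'  => 'us-west-2',
    'version' => 'latest',
]);

// The task is asynchronous; poll until it finishes.
$taskId = $result_polly['SynthesisTask']['TaskId'];
do {
    sleep(2);
    $task   = $client_polly->getSpeechSynthesisTask(['TaskId' => $taskId]);
    $status = $task['SynthesisTask']['TaskStatus'];
} while ($status === 'scheduled' || $status === 'inProgress');

// OutputUri looks like https://s3.us-west-2.amazonaws.com/<bucket>/<key>
$uri = $task['SynthesisTask']['OutputUri'];
$key = substr(parse_url($uri, PHP_URL_PATH), strlen('/' . $aws_bucket . '/'));

// Make the finished mp3 public with a separate S3 call.
$client_s3->putObjectAcl([
    'Bucket' => $aws_bucket,
    'Key'    => $key,
    'ACL'    => 'public-read',
]);

echo $uri; // full URL of the mp3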

Oracle APEX ORDS Limitation on URL template?

Is there a limitation on the Oracle APEX ORDS URI template?
Currently mapping a GET Request to
Works : URI Template: /history/{PLATAFORM}/{CONTEXT}/{APPLICAT}/
Works : URI Template: /history/{PLATAFORM}/{CONTEXT}/{APPLICAT}/test/
405 Method Not Allowed: URI Template: /history/{PLATAFORM}/{CONTEXT}/{APPLICAT}/{test}/
I can't find documentation about this scenario and wonder if it's an Oracle APEX limitation or bug, or maybe some configuration somewhere?
Any help would be appreciated,
Regards.
No limitations, other than please switch to using the : syntax instead of {}; we changed that a while back. For example:
-- Generated by Oracle SQL Developer REST Data Services 18.1.0.051.1417
-- Exported REST Definitions from ORDS Schema Version 17.4.0.18.13.50
-- Schema: KLRICE Date: Tue Apr 03 15:03:00 EDT 2018
--
BEGIN
ORDS.ENABLE_SCHEMA(
p_enabled => TRUE,
p_schema => 'KLRICE',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'klrice',
p_auto_rest_auth => FALSE);
ORDS.DEFINE_MODULE(
p_module_name => '/history/',
p_base_path => '/history/',
p_items_per_page => 25,
p_status => 'PUBLISHED',
p_comments => NULL);
ORDS.DEFINE_TEMPLATE(
p_module_name => '/history/',
p_pattern => ':PLATFORM/:CONTEXT/:APPLICAT/test',
p_priority => 0,
p_etag_type => 'HASH',
p_etag_query => NULL,
p_comments => NULL);
ORDS.DEFINE_HANDLER(
p_module_name => '/history/',
p_pattern => ':PLATFORM/:CONTEXT/:APPLICAT/test',
p_method => 'GET',
p_source_type => 'json/collection',
p_items_per_page => 25,
p_mimes_allowed => '',
p_comments => NULL,
p_source =>
'select :PLATFORM,:CONTEXT,:APPLICAT from dual'
);
COMMIT;
END;
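With that template in place, a GET on a URL like http://your-server:8080/ords/klrice/history/linux/prod/myapp/test (host and path values here are illustrative) should return the three bind values as a JSON collection, where the equivalent {PLATFORM}-style template was producing the 405.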

Creating Salesorder with Vtiger Webservice missing Tax

I have a problem when creating a sales order with the Vtiger webservice:
the sales order is created, but somehow the tax is not added.
I thought it had something to do with the parameter hdnTaxType,
because even if I add the value 'group' it does not apply the tax to the sales order.
I have to add the tax type 'group' manually; only then does the system add the tax.
That's why I tried to add values like:
'tax1' => '7.00',
and
'group_tax' =>[
'group_tax_percentage1' => '7.0'
],
Nothing so far has helped.
Does anybody have an idea what the problem is?
Thank you
Tobi
$salesOrder = [
    'lastname'         => $customer['lastname'],
    'subject'          => 'Order from 01.01.2018',
    'sostatus'         => '1',
    'assigned_user_id' => '',
    'bill_street'      => 'Rechnungsstrasse 123',
    'ship_street'      => 'Lieferungsstrasse 123',
    'productid'        => '14x4325',
    'currency_id'      => '21x1',
    'carrier'          => 'DHL',
    'txtAdjustment'    => '13',
    'salescommission'  => '12',
    'exciseduty'       => '15',
    'hdnTaxType'       => 'group',
    'tax1'             => '7.00',
    'hdnS_H_Amount'    => '22.22',
    'group_tax'        => [
        'group_tax_percentage1' => '7.0'
    ],
    'LineItems' => [
        0 => [
            'taxid'     => '33x1',
            'productid' => '14x6',
            'listprice' => '20.53',
            'quantity'  => '3',
            'comment'   => 'Product123'
        ]
    ]
];
Web services in Vtiger are not complete.
In this case you can register your own web service that runs your queries, or adjust the SalesOrder in an after-save event, as in the sketch below.
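A minimal sketch of the after-save approach, assuming Vtiger's standard event API (VTEventsManager / VTEventHandler); the handler file path, the class name, and the idea of forcing vtiger_salesorder.taxtype are illustrative assumptions, not a verified fix:
// Registration script, run once from vtiger's root directory:
include_once 'include/events/include.inc';
$em = new VTEventsManager($adb);
$em->registerHandler(
    'vtiger.entity.aftersave.final',
    'modules/SalesOrder/handlers/FixGroupTax.php', // illustrative path
    'FixGroupTax'
);

// modules/SalesOrder/handlers/FixGroupTax.php (illustrative):
class FixGroupTax extends VTEventHandler
{
    function handleEvent($eventName, $entityData)
    {
        if ($entityData->getModuleName() !== 'SalesOrder') {
            return;
        }
        // Assumption: forcing the header tax type to 'group' here makes
        // vtiger apply the group tax that the webservice call dropped.
        global $adb;
        $adb->pquery(
            "UPDATE vtiger_salesorder SET taxtype = 'group' WHERE salesorderid = ?",
            array($entityData->getId())
        );
    }
}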

google cloud bigtable column versions are not deleted

We have created a table in Cloud Bigtable with two column families: one column family with 30 versions and the other with 1 version. However, when we query the table, we get multiple versions of the columns for which we set the maximum number of versions to 1.
Table create statement:
create 'myTable', {NAME => 'cf1', VERSIONS => '30'}, {NAME => 'cf2', VERSIONS => '1'}
Describe 'myTable':
{NAME => 'cf2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'cf1', BLOOMFILTER => 'ROW', VERSIONS => '30', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
How does Bigtable garbage collection work? How frequently does it delete the older versions? Or are we missing something when creating the table?
From the Bigtable docs: "Deletion of values happens opportunistically in the background, so you might still be able to read the data for several days after it has expired." The garbage-collection page of the Bigtable documentation explains this in more detail.
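Until garbage collection catches up, you can also enforce the cap at read time by limiting versions in the request itself; for example, in the HBase shell (the row key here is illustrative):
get 'myTable', 'some-row-key', {COLUMN => 'cf2', VERSIONS => 1}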

How do you get metadata from Amazon S3 object using AWS SDK PHP?

I've been looking through all the docs for the AWS SDK for PHP and I can't see a way to retrieve an object's metadata. I can retrieve the Key, Size, Last Modified, etc., but I don't see an example in the docs of how to get the metadata.
The call you're looking for is headObject. According to the docs: "The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object."
Here is the example call from the version 3 SDK (this is an old enough post that I assume version 3 would be used now instead of version 2, but both SDKs include this call):
$result = $client->headObject([
'Bucket' => '<string>', // REQUIRED
'IfMatch' => '<string>',
'IfModifiedSince' => <integer || string || DateTime>,
'IfNoneMatch' => '<string>',
'IfUnmodifiedSince' => <integer || string || DateTime>,
'Key' => '<string>', // REQUIRED
'Range' => '<string>',
'RequestPayer' => 'requester',
'SSECustomerAlgorithm' => '<string>',
'SSECustomerKey' => '<string>',
'SSECustomerKeyMD5' => '<string>',
'VersionId' => '<string>',
]);
See the headObject entry in the AWS SDK for PHP documentation.
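A quick usage sketch, with illustrative bucket and key names: user-defined metadata (the x-amz-meta-* headers set at upload) comes back in the result's Metadata map, while system metadata appears as top-level result fields.
$result = $client->headObject([
    'Bucket' => 'my-example-bucket',       // illustrative
    'Key'    => 'files/audio/speech.mp3',  // illustrative
]);

// User-defined metadata, stored as x-amz-meta-* headers on upload:
foreach ($result['Metadata'] as $name => $value) {
    echo "$name: $value\n";
}

// System metadata is exposed as top-level result fields:
echo $result['ContentLength'], "\n"; // size in bytes
echo $result['ContentType'], "\n";
echo $result['LastModified'], "\n";  // DateTime-like object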