I am trying to format the AWS CLI table output so that it renders as a nicely formatted Markdown table in Typora, GitHub .md files, etc.
For example, the original table-formatted output from the AWS CLI command
$ aws ec2 describe-subnets --query "Subnets[*].{CIDR:CidrBlock,Name:Tags[?Key=='Name']|[0].Value,AZ:AvailabilityZone}" --output table
is
----------------------------------------------------------------------
| DescribeSubnets |
+------------+------------------+------------------------------------+
| AZ | CIDR | Name |
+------------+------------------+------------------------------------+
| eu-west-3c| 10.1.103.0/24 | vpc-acme-test-public-eu-west-3c |
| eu-west-3b| 172.31.16.0/20 | None |
| eu-west-3a| 10.1.101.0/24 | vpc-acme-test-public-eu-west-3a |
| eu-west-3c| 10.1.3.0/24 | vpc-acme-test-private-eu-west-3c |
| eu-west-3b| 10.1.2.0/24 | vpc-acme-test-private-eu-west-3b |
| eu-west-3a| 172.31.0.0/20 | None |
| eu-west-3c| 172.31.32.0/20 | None |
| eu-west-3a| 10.1.1.0/24 | vpc-acme-test-private-eu-west-3a |
| eu-west-3b| 10.1.102.0/24 | vpc-acme-test-public-eu-west-3b |
+------------+------------------+------------------------------------+
Based on assorted Markdown tutorials and tests, output that renders properly as a table in Typora and GitHub looks something like this:
| AZ | CIDR | Name |
|------------|------------------|------------------------------------|
| eu-west-3c| 10.1.103.0/24 | vpc-acme-test-public-eu-west-3c |
| eu-west-3b| 172.31.16.0/20 | None |
| eu-west-3a| 10.1.101.0/24 | vpc-acme-test-public-eu-west-3a |
| eu-west-3c| 10.1.3.0/24 | vpc-acme-test-private-eu-west-3c |
| eu-west-3b| 10.1.2.0/24 | vpc-acme-test-private-eu-west-3b |
| eu-west-3a| 172.31.0.0/20 | None |
| eu-west-3c| 172.31.32.0/20 | None |
| eu-west-3a| 10.1.1.0/24 | vpc-acme-test-private-eu-west-3a |
| eu-west-3b| 10.1.102.0/24 | vpc-acme-test-public-eu-west-3b |
(The text above does not render as a table on Stack Overflow. The original post included a screenshot of this table rendered in Typora.)
I could not find any AWS CLI option for this, but the following Unix-style chain of filters does the job.
Pipe the output of the AWS command to:
sed s/'+'/'|'/g | tail -n +4 | head -n -1
The full CLI command is:
$ aws ec2 describe-subnets --query "Subnets[*].{CIDR:CidrBlock,Name:Tags[?Key=='Name']|[0].Value,AZ:AvailabilityZone}" --output table | sed s/'+'/'|'/g | tail -n +4 | head -n -1
Other suggestions welcome!
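One other idea, sketched below and not tested against your exact data: skip the table output entirely and build the Markdown rows from JSON with jq. The column names match the aliases used in the --query above, and a missing Name tag is printed as None to mimic the table output.
# build the Markdown table directly from the JSON output with jq
aws ec2 describe-subnets \
  --query "Subnets[*].{CIDR:CidrBlock,Name:Tags[?Key=='Name']|[0].Value,AZ:AvailabilityZone}" \
  --output json | \
jq -r '"| AZ | CIDR | Name |", "|----|------|------|", (.[] | "| \(.AZ) | \(.CIDR) | \(.Name // "None") |")'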
I am new to Power BI and am trying to implement the following scenario.
I have the following two tables:
Table 1:
| ExtractionId | DatasetId | UserId |
| -- | --- | --- |
| E_ID1 | D_ID1 | sta#example.com |
| E_ID2 | D_ID1 | dany#example.com |
| E_ID3 | D_ID2 | dany#example.com |
Table 2:
| DatasetId | Date | UserId | Status |
| --| --- | --- | --- |
| D_ID1 | 05/30/2021 | sta#example.com | Completed |
| D_ID1 | 05/30/2021 | dany#example.com | Completed |
| D_ID1 | 05/31/2021 | sta#example.com | Partial |
| D_ID1 | 05/31/2021 | dany#example.com | Completed |
| D_ID2 | 05/30/2021 | sta#example.com | Completed |
| D_ID2 | 05/30/2021 | dany#example.com | Completed |
| D_ID2 | 05/31/2021 | sta#example.com | Partial |
| D_ID2 | 05/31/2021 | dany#example.com | Completed |
I am trying to create a Power BI report where, given an ExtractionId (in a slicer), we need to identify the corresponding DatasetId and UserId from Table 1 and use those fields to filter Table 2, providing a visual of user status over the given date range.
When I try to implement this, I create a many-to-many relationship between the DatasetId columns of Table 1 and Table 2, but I cannot create a second active relationship on the UserId columns at the same time, as I get the following error:
You can't create a direct active relationship between Table1 and Table2 because an active set of indirect relationship already exists.
Because of this, given an ExtractionId, I can filter on DatasetId but not UserId, and vice versa. Could you please help me understand what mistake I am making here and how to resolve it?
Thanks in advance
In this case, as you said, you can't have two active relationships between the same pair of tables. You can only merge the two (or more) columns into a single key column in each table; then you can create the relationship on that merged column.
I want to move the .py files from one parent folder to another in a GCS bucket, preserving the folder and sub-folder structure.
Folder A, which contains the files I want to copy, looks something like this:
FOLDER_A
|
+---PROJECT_A
| Project_A_PROD.py
| Project_A - flow chart.png
| README.md
|
\---Project_B_APAC
| +---Project_B1_APAC
| | Project_B1_APAC_flow_chart.png
| | Project_B1_APAC.py
| | README.md
| |
| +---Project_B2_APAC
| | Project_B2_APAC.py
| | Project_B2_APAC_flow_chart.png
| | README.md
| |
| \---Project_B3_APAC
| Project_B3_APAC.py
| Project_B3_APAC_flow_chart.png
| README.md
And folder B, where I want to move the .py files, should look like this:
FOLDER_B
|
+---PROJECT_A
| Project_A_PROD.py
|
\---Project_B_APAC
| +---Project_B1_APAC
| | Project_B1_APAC.py
| |
| +---Project_B2_APAC
| | Project_B2_APAC.py
| |
| \---Project_B3_APAC
| Project_B3_APAC.py
Both the folders are present in the same bucket.
Any help would be highly appreciated.
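Not an authoritative answer, but one way I'd sketch it with gsutil, assuming the bucket is called my-bucket (replace with your bucket name): list every .py object under FOLDER_A, then move each one to the same relative path under FOLDER_B.
# list all .py objects under FOLDER_A, then move each to the same path under FOLDER_B
for src in $(gsutil ls 'gs://my-bucket/FOLDER_A/**.py'); do
  dst="${src/FOLDER_A/FOLDER_B}"   # swap only the top-level folder in the object path
  gsutil mv "$src" "$dst"
done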
I want to start a minikube cluster on a specific network/network adapter in VirtualBox, so that I can launch other VMs on the same network, like below:
+-------+ +------+ +----------------+
| | | | | |
| VM2 | | VM1 | | Minikube |
| | | | | Cluster |
| | | | | |
+---+---+ +---+--+ +------------+---+
| | |
| | |
| +------+------------+ |
+--+ | |
| 192.168.10.0/24 +-----+
+-------------------+
But I don't see many options for networking in the minikube start CLI.
Is it possible to start minikube like that, or is there any trick to set it up as above?
When it comes to adjusting networking with minikube start you can use the following option:
--host-only-cidr string The CIDR to be used for the minikube VM (only supported with Virtualbox driver) (default "192.168.99.1/24")
As you can see in the table here, by default the NAT option doesn't give you access to the minikube VM from either the host or other guests (VMs), but you can additionally set up port forwarding, which is well described in this article.
Although, as mentioned, minikube start doesn't support many options for modifying the networking of the default VM, you can easily change it by adding an additional bridged adapter once the minikube VM is created, using the VirtualBox GUI or the vboxmanage command-line tool, as some users suggest here and here.
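For example, a sketch assuming the virtualbox driver and the 192.168.10.0/24 network from the question (flag spellings can vary slightly between minikube versions):
# start minikube with a host-only network using the CIDR you want (VirtualBox driver only)
minikube start --driver=virtualbox --host-only-cidr="192.168.10.1/24"

# or, after the VM exists, attach an extra adapter to an existing host-only network
minikube stop
VBoxManage modifyvm "minikube" --nic3 hostonly --hostonlyadapter3 vboxnet1
minikube start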
I have checked again; the minikube cluster is attached to 2 networks:
NAT
Host-Only Network(vboxnet1)
Since it is already connected to a host-only adapter, I can attach my VM to the existing adapter and use it like below:
+--------+ +---------------------+
| | | Minikube |
| | | |
| VM | | eth1 eth0 |
| | | + + |
| | +---------------------+
+---+----+ | |
| | |
| | |
| +------------v------+ |
| | | v
+------->+ vboxnet1 | NAT
| 192.168.99.0/24 |
| |
+-------------------+
Any other suggestions are welcome
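For completeness, a sketch of attaching another VM to the same host-only network with vboxmanage ("MyVM" and the adapter number are placeholders for your own setup):
# attach the other VM's second adapter to the host-only network minikube already uses
VBoxManage modifyvm "MyVM" --nic2 hostonly --hostonlyadapter2 vboxnet1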
I want to dump a PostgreSQL database to an SQL file. Here is the result of the pg_dump DB_NAME > dump.sql command:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation django_migrations
pg_dump: [archiver (db)] query was: LOCK TABLE public.django_migrations IN ACCESS SHARE MODE
When I want to list relations:
postgres=# \d
No relations found.
List of schemas:
List of schemas
Name | Owner | Access privileges | Description
--------+----------+----------------------+------------------------
public | postgres | postgres=UC/postgres+| standard public schema
| | =UC/postgres +|
| | root=U/postgres +|
| | secretusr=U/postgres |
(1 row)
Database list:
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+------------------------
db_secret | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | secretusr=CTc/postgres+
| | | | | root=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
Where is the problem?
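The \d output suggests the psql session is connected to the postgres database rather than db_secret, and pg_dump is presumably running as a role without privileges on that table. A sketch of things to try, using the db_secret and secretusr names from the listings above:
# inspect the relations in the actual application database, not in "postgres"
psql -U secretusr -d db_secret -c '\d'

# dump as a role that has privileges on the tables (the owner or a superuser)
pg_dump -U secretusr db_secret > dump.sql

# or, as a superuser, grant read access to the role used for the dump
psql -U postgres -d db_secret -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO secretusr;'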
I am trying to build a data collection pipeline on top of AWS services. The overall architecture is given below.
In summary, the system should get events from API Gateway (1) (one request for each event) and the data should be written to Kinesis (2).
I am expecting ~100k events per second. My question is related to KPL usage in Lambda functions. In step 2 I am planning to write a Lambda function that uses the KPL to write events to Kinesis with high throughput. But I am not sure this is possible, as API Gateway calls the Lambda function separately for each event.
Is it possible/reasonable to use the KPL in such an architecture, or should I use the Kinesis Put API instead?
1 2 3 4
+----------------+ +----------------+ +----------------+ +----------------+
| | | | | | | |
| | | | | | | |
| AWS API GW +-----------> | AWS Lambda +-----------> | AWS Kinesis +----------> | AWS Lambda |
| | | Function with | | Streams | | |
| | | KPL | | | | |
| | | | | | | |
+----------------+ +----------------+ +----------------+ +-----+-----+----+
| |
| |
| |
| |
| |
5 | | 6
+----------------+ | | +----------------+
| | | | | |
| | | | | |
| AWS S3 <-------+ +----> | AWS Redshift |
| | | |
| | | |
| | | |
+----------------+ +----------------+
I am also thinking about writing directly to S3 instead of calling the Lambda function from API Gateway. If the first architecture is not reasonable this may be a solution, but in that case there will be a delay until the data is written to Kinesis.
1 2 3 4 5
+----------------+ +----------------+ +----------------+ +----------------+ +----------------+
| | | | | | | | | |
| | | | | | | | | |
| AWS API GW +-----------> | AWS Lambda +------> | AWS Lambda +-----------> | AWS Kinesis +----------> | AWS Lambda |
| | | to write data | | Function with | | Streams | | |
| | | to S3 | | KPL | | | | |
| | | | | | | | | |
+----------------+ +----------------+ +----------------+ +----------------+ +-----+-----+----+
| |
| |
| |
| |
| |
6 | | 7
+----------------+ | | +----------------+
| | | | | |
| | | | | |
I do not think using the KPL is the right choice here. The key concept of the KPL is that records get collected at the client and are then sent as a batch operation to Kinesis. Since Lambdas are stateless per invocation, it would be rather difficult to store the records for aggregation (before sending them to Kinesis).
I think you should have a look at the following AWS article, which explains how you can connect API Gateway directly to Kinesis. This way, you can avoid the extra Lambda which just forwards your request.
Create an API Gateway API as a Kinesis Proxy
Obviously, if the data coming through AWS API Gateway corresponds to one Kinesis Data Streams record, it makes no sense to use the KPL, as pointed out by Jens. In this case you can call the Kinesis API directly without using Lambda. Alternatively, you may do some additional processing in Lambda and send the data through PutRecord (not PutRecords, which is used by the KPL). Your code in Java would look like this:
// build the Kinesis client once (e.g. outside the Lambda handler so it is reused across invocations)
AmazonKinesisClientBuilder clientBuilder = AmazonKinesisClientBuilder.standard();
clientBuilder.setRegion(REGION);
clientBuilder.setCredentials(new DefaultAWSCredentialsProviderChain());
clientBuilder.setClientConfiguration(new ClientConfiguration());
AmazonKinesis kinesisClient = clientBuilder.build();
...
// then later, for each record
PutRecordRequest putRecordRequest = new PutRecordRequest();
putRecordRequest.setStreamName(STREAM_NAME);
putRecordRequest.setData(data);
putRecordRequest.setPartitionKey(daasEvent.getAnonymizedId());
putRecordRequest.setExplicitHashKey(Utils.randomExplicitHashKey());
putRecordRequest.setSequenceNumberForOrdering(sequenceNumberOfPreviousRecord);
PutRecordResult putRecordResult = kinesisClient.putRecord(putRecordRequest);
// keep the returned sequence number to preserve strict ordering on the next put
sequenceNumberOfPreviousRecord = putRecordResult.getSequenceNumber();
However, there may be cases when using the KPL from Lambda makes sense, for example when the data sent to AWS API Gateway contains multiple individual records which will be sent to one or more streams. In those cases the benefits of the KPL (see https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html) are still valid, but you have to be aware of the specifics of using it from Lambda, concretely the "issue" pointed out here https://github.com/awslabs/amazon-kinesis-producer/issues/143, and call
kinesisProducer.flushSync()
at the end of the insertions, which also worked for me.