Rename a Power BI On Prem Data Gateway

How can I rename an on prem data gateway?
I have found questions and answers about this, but none of them appear to be correct. Let me explain:
The gateway cluster can be renamed via the Power Platform Admin Center. The gateway cannot.
Installing a gateway on a new computer and "taking over" another gateway does not change its name.
The DataGateway module in PowerShell doesn't appear to provide a way to set the properties of a gateway cluster member. (No Set-DataGatewayClusterMember command.)
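For what it's worth, the Power BI REST API looks similarly read-only here: the Gateways endpoints let you list gateways and their data sources, but I can't see anything that would rename a gateway member. A minimal sketch to inspect what the service reports, assuming you already have an Azure AD access token for the Power BI API (the token value below is a placeholder):

import requests

# placeholder: obtain a real Azure AD access token for the Power BI API (e.g. via MSAL)
token = "<access_token>"

resp = requests.get(
    "https://api.powerbi.com/v1.0/myorg/gateways",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for gw in resp.json()["value"]:
    # "name" is what the service displays; nothing documented here appears to change it
    print(gw["id"], gw["name"])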
Alternatively, how can I change which gateway is primary in the gateway cluster?
Again, I have found lots of incomplete or wrong information online.
Installing a gateway on a new computer and "taking over" another gateway uses the same name as the old gateway.
Uninstalling the gateway from the computer hosting the primary gateway makes it "Offline".
Uninstalling the gateway from the computer hosting the primary gateway, then trying to remove the gateway, results in an error message: "Please remove all other member gateways before removing the primary instance of this gateway cluster."
Uninstalling the gateway from the computer hosting the primary gateway, disabling the gateway, and then trying to remove the gateway results in the same error message.
Again, looking at the DataGateway module in PowerShell, Set-DataGatewayCluster doesn't appear to provide a way to define which instance is primary (maybe a -PrimaryInstance or -PrimaryMember parameter?)
Then I thought...
What if I remove all cluster members? Does the cluster still exist? Can I then create a new gateway, add it to the cluster, and give it the correct name and expect that to become the primary member?
So I created a new gateway without adding it to a cluster. A new cluster was created. I added a new data source to it. Then I uninstalled the gateway from the machine and removed it (the primary member) from the cluster via Power Platform Admin Center. The cluster was immediately destroyed. I reinstalled the gateway to try to connect it to the new cluster. Sure enough... it's not there.
So, removing the primary gateway destroys the gateway cluster and all related data sources. That wouldn't really be part of my preferred solution.
Other options
Maybe I can find and change the gateway name in the Windows Registry of the computer where I installed the gateway. Nope.
Maybe:
make note of all settings for all data sources
remove all gateways
install the new gateway with the correct name
reconfigure all of the data sources
Or I can avoid some downtime if I...
Create a new gateway (with the right name) and cluster.
Copy all of my data source config from the old cluster to the new cluster.
Check the settings for every data set in Power BI and, if the data set uses a gateway, change which gateway it uses.
Maybe some of that could be done by a PowerShell script. But I don't see that Set-DataGatewayClusterDatasource provides any way for me to set the login credentials.
Either way, I need to spend a bunch of time completely reconfiguring how Power BI connects to all of my data sources because Microsoft won't let me update one small, arbitrary piece of text? That can't be right.
So... What is the correct way to rename a Power BI On Prem Data Gateway?

Related

Extract Entire AWS Setup into storable Files or Deployment Package(s)

Is there some way to 'dehydrate' or extract an entire AWS setup? I have a small application that uses several AWS components, and I'd like to put the project on hiatus so I don't get charged every month.
I built and configured the app directly through the various services' consoles (VPN, RDS, etc.). Is there some way I can extract my setup into files so I can keep them in version control and 'rehydrate' them back into AWS when I want to set up my app again?
I tried extracting pieces from Lambda and Event Bridge, but it seems like I can't just 'replay' these files using the CLI to re-create my application.
Specifically, I am looking to extract all code, settings, connections, etc. for:
Lambda: code, environment variables, layers, scheduling through EventBridge
IAM: users, roles, permissions
VPC: subnets, route tables, internet gateways, Elastic IPs, NAT gateways
EventBridge: cron settings, connections to Lambda functions
RDS: MySQL instances. I would like to get all the DDL; the data in the tables is not required.
Thanks in advance!
You could use Former2. It will scan your account and allow you to generate CloudFormation, Terraform, or Troposphere templates. It uses a browser plugin, but there is also a CLI for it.
What you describe is called Infrastructure as Code. The idea is to define your infrastructure as code and then deploy your infrastructure using that "code".
There are a lot of options in this space. To name a few:
Terraform
CloudFormation
CDK
Pulumi
All of those should allow you to import already existing resources. Terraform, at least, has an import command to bring an existing resource into your IaC project.
This way you could create a project that mirrors what you currently have in AWS.
Excluded are things that are, strictly speaking, not AWS resources, like:
Code of your Lambdas
MySQL DDL
Depending on the Lambda's deployment "strategy", the code either lives on S3 or was deployed directly to the Lambda service. If it is the former, you just need to find the S3 bucket and download the code from there. If it is the latter, you might need to copy and paste it by hand.
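For what it's worth, whichever way it was deployed, the Lambda API will hand back the current deployment package as a pre-signed download URL, so a small boto3 script can save you hunting for the bucket. A rough sketch (the function name and output directory are placeholders):

import io
import zipfile
import urllib.request

import boto3

lambda_client = boto3.client("lambda")

# get_function returns a pre-signed URL to the function's deployment package
info = lambda_client.get_function(FunctionName="my-function")
url = info["Code"]["Location"]

# download the zip and unpack it locally so it can go into version control
with urllib.request.urlopen(url) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
archive.extractall("my-function-src")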
When it comes to your MySQL DDL, you need a tool to export it; there are plenty of tools out there that can do this.
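One of those options is simply shelling out to mysqldump with --no-data, which emits the DDL without any row data. A sketch, where the host, user, and database name are placeholders:

import subprocess

# dump schema only (DDL, no row data) from the RDS MySQL instance
with open("schema.sql", "w") as out:
    subprocess.run(
        [
            "mysqldump",
            "--host", "mydb.example.eu-west-1.rds.amazonaws.com",
            "--user", "admin",
            "--password",   # prompts for the password instead of hard-coding it
            "--no-data",    # DDL only
            "--routines",   # include stored procedures and functions
            "mydatabase",
        ],
        stdout=out,
        check=True,
    )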
Once you have done that, you should be able to destroy all the AWS resources and deploy them again later from your new IaC.

How to properly set AWS inbound rules to accept response from external REST API call

My use case
I have an AWS Lambda-hosted function that calls an external API. In my case it is Trello's terrific and well-defined API.
My problem in a nutshell - TL;DR option: feel free to jump to the Problem Summary below
I had my external API call to Trello working properly. Now it is not working. I suspect I changed networking permissions within AWS that now block the returned response from the service provider. Details to follow.
My testing
I have tested my call to the API using Postman, so I know I have a well-formed request and a useful returned response from the service provider. The business logic is OK. For reference, here is the API call I am using. I have obfuscated my key and token for obvious reasons:
https://api.trello.com/1/cards?key=<myKey>&token=<myToken>&idList=<a_real_list_here>&name=New+cards+are+cool
This should put a new card on my Trello board, and in Postman (running on my local machine) it does so successfully. In fact, I had this working in an AWS Lambda function I recently deployed. Here is the call. (Note that I'm using the urllib3 library recommended by AWS.)
http.request("POST", "https://api.trello.com/1/cards?key=<myKey>&token=<myToken>&idList=<a_real_list_here>&name="+card_name+"&desc="+card_description)
Furthermore, I have tested the same capability with a cURL version of the same request. It is formed like this:
curl --location --request POST 'https://api.trello.com/1/cards?key=<myKey>&token=<myToken>&idList=<a_real_list_here>&name=New+cards+are+cool'
I can summarize the behavior like this
+------------+---------------+----------------------+---------------+
| | Local Machine | Previously on Lambda | Now on Lambda |
+------------+---------------+----------------------+---------------+
| cURL | GOOD | GOOD | N/A |
+------------+---------------+----------------------+---------------+
| HTTP POST | GOOD | GOOD | 443 Error |
+------------+---------------+----------------------+---------------+
Code and Errors
I am not getting a debuggable response. I get a 443, which I presume is the error code, but even that is not clear. Here is the code snippet:
# send to trello board (http is a urllib3.PoolManager() and logger a logging.Logger, both set up elsewhere)
try:
    http.request("POST", "<post string from above>")
except Exception:
    # log the request string if the call raises
    logger.debug("<post string from above>")
The code never seems to get to the logger.debug() call. I get this in the AWS log:
[DEBUG] 2021-01-19T21:56:24.757Z 729be341-d2f7-4dc3-9491-42bc3c5d6ebf
Starting new HTTPS connection (1): api.trello.com:443
I presume the "Starting new HTTPS connection..." log entry is coming from the urllib3 library.
PROBLEM SUMMARY
I know from testing that my actual API call to the external service is properly formed. At one point it was working well, but now it is not. Previously, in order to get it to work well, I had to fiddle with AWS permissions to allow the response to come back from the service provider. I did it, but I didn't fully understand what I did and I think I was just lucky. Now it's broken and I want to do it in a thoughtful way.
What I'm looking for is an understanding of how to set up the AWS permission structure to enable that return message from the service provider. AWS provides a comprehensive guide to how to use the API Gateway to give others access to services hosted on AWS, but it's much more sketchy about how to open permissions for responses from other service providers.
Thanks to the folks at Hava, I have this terrific diagram of the permissions in place for my AWS infrastructure:
[Diagram: security structure] The two nets marked in red are unrelated to this project. The first green check points to one of my EC2 machines and the second points to a related security group.
I'm hoping the community can help me to understand what the key permission elements (IAM roles, security groups, etc) are in play and what I need to look for in the related AWS permissions/networking/security structure.
As the Lambda is in your VPC, you need extra configuration to allow it to communicate beyond the VPC, because the Lambda runner does not have a public IP. Thus you'll need an internet or NAT gateway, as described here: https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
You'll need either additional managed services or infrastructure running a NAT gateway.
So, in the end, the problem was not a networking problem at all. In fact, the Lambda function did not have the right execution role assigned.
SPECIFICALLY
Lambda needs the AWSLambdaVPCAccessExecutionRole policy attached to its execution role in order to do the basic VPC work needed to reach all of the fancy networking infrastructure gymnastics shown above.
This is AWS managed, and the default AWS description for it is "Allows Lambda functions to call AWS services on your behalf."
If you are having this problem, here is how to check this out.
Go to your Lambda function ([Services] > [Lambda] > [Functions]) and then click on your function.
Go to the Configuration tab. At the right side of the window, select Edit.
If you were like me, you already had a role, but it may have been the wrong one. If you change the role, the console will take a while to reset the settings even before you hit Save; this is normal.
At the very bottom of the page, right below the role selection, you'll see a link to the role in the IAM control panel. Click on that to check your IAM policies.
Make sure that AWSLambdaVPCAccessExecutionRole is among the policies enabled. (A scripted version of this check is sketched below.)
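If you would rather check this from a script than by clicking through the console, something along these lines with boto3 should work (the function name is a placeholder, and it assumes your credentials can read both Lambda and IAM):

import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

# find the execution role attached to the function
role_arn = lambda_client.get_function_configuration(FunctionName="my-function")["Role"]
role_name = role_arn.split("/")[-1]

# list the managed policies attached to that role and look for the VPC access policy
attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
names = [p["PolicyName"] for p in attached]
print("AWSLambdaVPCAccessExecutionRole attached:", "AWSLambdaVPCAccessExecutionRole" in names)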
Red Herrings
Here are two things that initially led me astray:
I kept seeing 443 come back, which I took to be an error code from the urllib3 call. It was not. I tried a few other things, and my best guess is that it was the HTTPS port number, not an error.
I was sure the lack of access was a networking configuration error, until I tried an experiment that proved to me that it was not. Here is the proposed experiment:
If you follow all of the guidance you will have the following network setup:
One public subnet connected to the internet gateway
One private subnet connected to all of your internal resources
One NAT gateway (in the public subnet) that gives your private subnet a path out through the IGW
A routing table that connects your private subnet to the NAT gateway
A routing table that connects your public subnet to the IGW
THEN, with all of that set up, create a throw-away EC2 instance in your private subnet. When you set it up, it should not have a public IP. You can double check that by trying to use the CONNECT function on the EC2 pages. It should not work.
If you don't already have it, set up an EC2 in your public subnet. You should have a public IP for that one.
Now SSH into your public EC2. Once in there, SSH from the public EC2 to your private EC2. If all of your infrastructure is set up correctly, you should be able to do this. Once you're logged into the private EC2, you should be able to ping a public web site from inside that private subnet.
The fact that you could not directly connect to your private EC2 tells you that the subnet is secure-ish. The fact that you could reach the internet from that private EC2 tells you that the NAT gateway and routing tables are set up correctly.
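If you prefer to sanity-check the routing from code instead of SSH hopping, a rough boto3 sketch like this can show where a subnet's default route points (the subnet ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# look up the route table associated with the subnet in question
resp = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": ["subnet-0123456789abcdef0"]}]
)

for table in resp["RouteTables"]:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            # a private subnet should point its default route at a NAT gateway (nat-...),
            # a public subnet at the internet gateway (igw-...)
            print("default route ->", route.get("NatGatewayId") or route.get("GatewayId"))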
But of course none of this matters if your Execution Role is not right.
One Last Thought
I'm absolutely no expert on this, but I invite the experts to correct me here. I'll edit it as I learn. For those new to these concepts, I deeply recommend taking the time to map out your network on a piece of paper. When I have enough credibility, I'll post the map I made to help me think through all of this.
Paul

How to get Google Cloud Build working inside VPC Perimeter?

I have a question that is confusing me a little. I have a project locked down at the org level through a perimeter fence. This is to whitelist IP ranges that can access a Cloud Storage bucket, because the user has no ability to authenticate through service accounts or APIs and requires streaming of data.
This is fine and working; however, I am confused about how to also open up access to serverless environments inside GCP. The issue in question is Cloud Build. Since introducing the perimeter I can no longer run Cloud Build, due to a violation of VPC controls. Can anyone point me in the direction of how to enable this, since whitelisting the entire Cloud Build IP range is obviously not an option?
You want to create a Perimeter Bridge between the resources that you want to be able to access each other. You can do this in the console or using gcloud as noted in the docs that I linked.
The official documentation mentions that if you use VPC Service Controls, some services are not supported, Cloud Build among them; this is why the problem started right after you deployed the perimeter.
Hi all, so the answer is this.
What you want to do is set up one project that is locked down by the VPC perimeter and has no APIs available, for ingestion of the IP-whitelisted storage bucket. Then you create a second project that has a VPC but does not disable the Cloud Storage APIs etc. From there you can read directly from the IP-whitelisted Cloud Storage bucket in the other project.
Hope this makes sense, as I wanted to share back to the awesome guys above who put me on the right track.
Thanks again
Cloud Build is now supported by VPC Service Controls: see "VPC Service Controls supported products and limitations".

Change RDS to Public accessible

I am a newbie in amazon web services and have got some questions related to amazon RDS:
1. How can we use the AWS API to define an RDS instance and send the 'publicly accessible' parameter to it? I know that the CLI has a -pub flag (CLI-RDS) which can be used, but what about when we are not using the CLI and are going to use a programming language like Node.js?
2. Is it possible to change the state of the publicly-accessible parameter of an RDS instance? I mean, if we have already defined an RDS instance in a private state, can we change it later? If yes, how? I also read the discussion here (RDS to public); they suggested deleting the current RDS instance with a final snapshot and then restoring the snapshot with public accessibility. That is not possible in my case. Is there any other way? We want to change the state of the publicly accessible parameter dynamically because of some security issues.
This API call is available in all clients (console, SDK, CLI, ...). Here is the documentation for Node.js; check the PubliclyAccessible parameter:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDS.html#modifyDBInstance-property
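The same ModifyDBInstance call exists in every SDK; for illustration, here is a rough equivalent with boto3 in Python (the instance identifier is a placeholder, and ApplyImmediately skips waiting for the maintenance window):

import boto3

rds = boto3.client("rds")

# flip the instance to publicly accessible; set PubliclyAccessible=False to reverse it
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    PubliclyAccessible=True,
    ApplyImmediately=True,
)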
It is surely possible. However, as the CloudFormation documentation mentions, that change requires replacing the instance, so expect and plan for some downtime:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-publiclyaccessible

Custom DNS resolver for Google Cloud Dataflow pipeline

I am trying to access Kafka and 3rd-party services (e.g., InfluxDB) running in GKE, from a Dataflow pipeline.
I have a DNS server for service discovery, also running in GKE. I also have a route in my network to access the GKE IP range from Dataflow instances, and this is working fine. I can manually nslookup from the Dataflow instances using my custom server without issues.
However, I cannot find a proper way to set up an additional DNS server when running my Dataflow pipeline. How could I achieve that, so that KafkaIO and similar sources/writers can resolve hostnames against my custom DNS?
sun.net.spi.nameservice.nameservers is tricky to use, because it must be set very early on, before the name service is statically instantiated. I would pass it with java -D, but Dataflow is going to run the code itself directly.
In addition, I would not want to simply replace the system's resolvers, but merely append a new one to the GCP project-specific resolvers that the instance comes pre-configured with.
Finally, I have not found any way to use a startup script with the Dataflow instances, as you would for a regular GCE instance.
I can't think of a way today of specifying a custom DNS in a VM other than editing the /etc/resolv.conf [1] file in the box. I don't know if it is possible to share the default network. If it is, machines are available at hostName.c.[PROJECT_ID].internal, which may serve your purpose if hostName is stable [2].
[1] https://cloud.google.com/compute/docs/networking#internal_dns_and_resolvconf [2] https://cloud.google.com/compute/docs/networking