In the Power BI Desktop file there are no errors; the error appears only in the Power BI service when refreshing.
ERROR:
Query contains unsupported function. Function name: Odbc.DataSource
Parameter1
"Mydsn" ' as a parameter
Used as literal text - not dynamic:
= Odbc.DataSource("dsn=Mydsn", [HierarchicalNavigation=true]) ' no error
Used via the text parameter - still not dynamic:
= Odbc.DataSource("dsn=" & Parameter1, [HierarchicalNavigation=true]) ' no error
Odbc_dsn ' query
= settings[Column2]{0} ' value read from the CSV
Using the query that reads the CSV:
= Odbc.DataSource("dsn=" & Odbc_dsn, [HierarchicalNavigation=true]) ' Query contains unsupported function. Function name: Odbc.DataSource
Referencing the CSV table directly:
= Odbc.DataSource("dsn=" & settings[Column2]{0}, [HierarchicalNavigation=true]) ' Query contains unsupported function. Function name: Odbc.DataSource
None of the privacy settings changes anything; I have tried all the available options (setting them to None, Private, Organizational, Public, disabling privacy settings entirely, etc.).
How can I use an ODBC DSN name taken from a CSV file?
(Answer to be expanded with additional info provided - see comments on original question)
While I have never imported a DSN name through a CSV, the fact that it works on your local machine makes me accept that this is at least possible, so we'll instead focus on issues with the gateway.
My first impression as to why this might not be working is simply permissions and visibility.
Having worked with a number of Power BI Service setups, an unrecognized ODBC DSN usually comes down to one of the following issues:
Is the DSN set up as a system DSN?
Is the gateway set up to run as the Local Service account or as a dedicated Power BI gateway host account?
Does the user the gateway runs under actually have permissions to the directory containing the data source (or custom connector) that the connection depends on?
So:
Fairly straightforward: all gateway-accessible ODBC sources need to be set up on the gateway host as system DSNs, not user DSNs. See your ODBC Data Source Administrator here:
Confirm the on-premises gateway's "Log On" user on the gateway's host machine. Generally I recommend going to Windows Services and making sure to use the "Local System account" (to inherit permissions), but just keep this in mind during the next step of checking local permissions.
This applies to anything which is "self-hosted" on the machine that hosts the gateway: whichever account runs the Power BI gateway service must also be given explicit permissions to the local resources it needs. For example, if you add a custom connector to the Documents directory on the gateway host under your own user account, make sure the account running the gateway has access to that directory and file (file Properties -> Security -> user permissions, etc.).
In my experience, 9 times out of 10 one of these things isn't set up right.
Additional note: every time you upgrade or re-install the Power BI gateway on the host, you will have to change the service logon account and double-check all permissions. I don't know why, but the installer overwrites that setting by default, disabling all refreshes until it is restored.
Edit:
After further thought, I believe you will eventually run into a roadblock regardless: the Power BI service's gateway data source mappings are 1-to-1. After upload you will get this screen in the dataset settings:
which requires that the data source has been defined in the Power BI service's settings:
I don't believe it is currently possible to make that definition a variably composed string, as the question asks.
The DSN name can only be a static string.
I deleted all the items in the DataTemplate table, but when I query them again with the searchDataTemplates endpoint (in the app or in AppSync) it returns the old data, whereas listDataTemplates returns nothing, which is correct. I needed to repopulate the data in the table.
(Screenshots: data template table, search endpoint, list endpoint)
When I updated items individually it worked just fine, but when I deleted all the items from the console (around 700 items) the search endpoint stopped working. Just the search.
UPDATE:
I repopulated the data hoping it would reset, but now listDataTemplates shows the new data while search still shows the old data. Is there some cache that needs to be reset?
SECOND UPDATE:
I removed the table and the AppSync functions are gone; however, when I recreated the table (with no data), testing the function still returns the old data. I'm guessing the OpenSearch side hasn't been updated?
If you are using AppSync with the Amplify CLI, @searchable will automatically create the following:
An OpenSearch Domain
A Lambda Function that will be attached to the DynamoDB Streams and push the changes (create/update/delete) over to your OpenSearch Domain.
The problem you're facing is most likely that this Lambda Function failed to push the changes from DynamoDB Streams to OpenSearch. A quick suggestion is to check the created Lambda Function (and its CloudWatch logs) first.
Reference: @searchable
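If the stream-to-OpenSearch Lambda has fallen out of sync (for example, the bulk delete never reached the index), one option is to rebuild the search index from whatever is currently in DynamoDB. Below is a rough sketch of such a backfill in Python; the region, table name, domain endpoint, index name and the assumption that the primary key is "id" are all placeholders you would need to adapt, and deleting the index will also drop any custom mappings Amplify created.

import json
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, helpers
from requests_aws4auth import AWS4Auth

region = "us-east-1"                                    # placeholder
table_name = "DataTemplate-dev"                         # placeholder: your DynamoDB table
host = "search-your-domain.us-east-1.es.amazonaws.com"  # placeholder: OpenSearch endpoint
index_name = "datatemplate"                             # placeholder: index used by @searchable

# Sign requests to the OpenSearch domain with the caller's AWS credentials.
creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, "es",
                   session_token=creds.token)
client = OpenSearch(hosts=[{"host": host, "port": 443}], http_auth=awsauth,
                    use_ssl=True, verify_certs=True,
                    connection_class=RequestsHttpConnection)

# Drop the stale index so documents for deleted items disappear from search results.
if client.indices.exists(index=index_name):
    client.indices.delete(index=index_name)

# Scan everything that currently exists in the table and re-index it.
table = boto3.resource("dynamodb", region_name=region).Table(table_name)
items, page = [], table.scan()
items.extend(page["Items"])
while "LastEvaluatedKey" in page:
    page = table.scan(ExclusiveStartKey=page["LastEvaluatedKey"])
    items.extend(page["Items"])

docs = [json.loads(json.dumps(i, default=str)) for i in items]  # convert Decimal values
helpers.bulk(client, ({"_index": index_name, "_id": d["id"], "_source": d} for d in docs))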
This issue can only happen if caching is enabled in your application.
I am not sure what infrastructure you are using, so I will go ahead with some educated guesses. Please feel free to correct me if I've overstepped.
From your description of the question, you have AppSync as the API layer and DynamoDB as the primary database.
If these are the only two resources you have, please check the AppSync cache configuration (a quick boto3 check is sketched after the steps below):
Open the AppSync console
From the left panel select APIs -> your API -> Caching
Validate that Caching behavior is set to None
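For completeness, here is a small boto3 sketch (the API id is a placeholder) that checks whether an API cache exists at all, and flushes it if it does; flushing is a safe thing to try when stale results are suspected.

import boto3

appsync = boto3.client("appsync")
api_id = "your-appsync-api-id"  # placeholder: found under your API's Settings page

try:
    cache = appsync.get_api_cache(apiId=api_id)
    print("Cache config:", cache["apiCache"])
    appsync.flush_api_cache(apiId=api_id)  # clear potentially stale cached results
except appsync.exceptions.NotFoundException:
    print("No API cache configured - caching behavior is effectively None.")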
In case you have AWS OpenSearch enabled for the search query (I could be wrong; I'm picking this up from the previous comment), then validate the cluster configuration:
Open the AWS OpenSearch Service console
From the left panel select Domains and click on the OpenSearch domain that you are using
Scroll to the bottom right, look for Advanced cluster settings, and check that the attribute Fielddata cache allocation is set to 0
If Fielddata cache allocation is not 0, update the cluster configuration and modify the advanced cluster settings to set the Fielddata cache allocation field to 0 (a boto3 sketch of this change follows).
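The same change can be scripted with boto3 if you prefer. This is only a sketch: the domain name is a placeholder, and it assumes the console's "Fielddata cache allocation" field corresponds to the indices.fielddata.cache.size advanced option.

import boto3

opensearch = boto3.client("opensearch")
opensearch.update_domain_config(
    DomainName="your-domain-name",  # placeholder
    AdvancedOptions={"indices.fielddata.cache.size": "0"},
)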
Wait for a few minutes (I would suggest 5 minutes) and then retry your use-case.
I hope this helps resolve your issue.
I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled "druid-google-extensions" by adding it to the druid.extensions.loadList array in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (holding the service account JSON, as described in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I get the following error when I try to ingest data from GCS buckets:
Error: Cannot construct instance of
org.apache.druid.data.input.google.GoogleCloudStorageInputSource,
problem: Unable to provision, see the following errors: 1) Error in
custom provider, java.io.IOException: The Application Default
Credentials are not available. They are available if running on Google
App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise,
the environment variable GOOGLE_APPLICATION_CREDENTIALS must be
defined pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information. at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) while locating
com.google.api.client.http.HttpRequestInitializer for the 3rd
parameter of
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
at
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.storage.google.GoogleStorageDruidModule) while
locating org.apache.druid.storage.google.GoogleStorage 1 error at
[Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1,
column: 180] (through reference chain:
org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to that case. Please help me.
We want to move data from GCP into on-prem Druid, and we don't want to run the cluster in GCP, so we need to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, so that they are always delivered to Druid regardless of user or environment changes (for example, an Environment=GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json line in the unit's [Service] section).
GOOGLE_APPLICATION_CREDENTIALS must point to a file path; it must not contain the file's content.
In a cluster (like Kubernetes), it is usual to mount a volume containing the key file and to set the env var to point to the file on that volume.
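As a quick sanity check, you can run something like the following small Python sketch under the same account and environment the Druid services use, to confirm the variable is set and points to a readable, valid key file:

import json
import os

path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
assert path, "GOOGLE_APPLICATION_CREDENTIALS is not set in this environment"
assert os.path.isfile(path), path + " does not exist or is not a file"

with open(path) as f:
    key = json.load(f)
print("Key file parses; service account:", key.get("client_email"))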
I am trying to run the demo code given for PDF parsing with GCP Document AI. Exporting the Google credentials on the command line works fine for running the code. The problem comes when the code needs to run in memory and hence no credential files can be read from disk. Is there a way to pass the credentials to the Document AI parsing function?
Google's sample code:
from google.cloud import documentai_v1beta2 as documentai

def main(project_id='YOUR_PROJECT_ID',
input_uri='gs://cloud-samples-data/documentai/invoice.pdf'):
"""Process a single document with the Document AI API, including
text extraction and entity extraction."""
client = documentai.DocumentUnderstandingServiceClient()
gcs_source = documentai.types.GcsSource(uri=input_uri)
# mime_type can be application/pdf, image/tiff,
# and image/gif, or application/json
input_config = documentai.types.InputConfig(
gcs_source=gcs_source, mime_type='application/pdf')
# Location can be 'us' or 'eu'
parent = 'projects/{}/locations/us'.format(project_id)
request = documentai.types.ProcessDocumentRequest(
parent=parent,
input_config=input_config)
document = client.process_document(request=request)
# All text extracted from the document
print('Document Text: {}'.format(document.text))
def _get_text(el):
"""Convert text offset indexes into text snippets.
"""
response = ''
# If a text segment spans several lines, it will
# be stored in different text segments.
for segment in el.text_anchor.text_segments:
start_index = segment.start_index
end_index = segment.end_index
response += document.text[start_index:end_index]
return response
for entity in document.entities:
print('Entity type: {}'.format(entity.type))
print('Text: {}'.format(_get_text(entity)))
print('Mention text: {}\n'.format(entity.mention_text))
When you run your workloads on GCP, you don't need a service account key file. You mustn't use one!
Why? Two reasons:
It's useless, because all GCP products have at least a default service account, and most of the time you can customize it. You can have a look at the Cloud Functions identity in your case.
A service account key file is a file. That means a lot: you can copy it, send it by email, commit it to a Git repository... Many people can get access to it and you lose control of this secret. And because it's a secret, you have to store it securely and rotate it regularly (at least every 90 days, per Google's recommendation)... It's a nightmare! When you can, don't use service account key files!
What do the libraries do?
They check whether the GOOGLE_APPLICATION_CREDENTIALS env var exists.
They look in the "well-known" location (when you perform gcloud auth application-default login to allow a local application to use your credentials to access Google resources, a file is created in a standard location on your computer).
If not, they check whether the metadata server exists (only on GCP). This server provides the authentication information to the libraries.
Otherwise, they raise an error.
So, simply use the correct service account on your function and grant it the correct roles to achieve what you want to do. A minimal sketch of this flow follows.
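This sketch assumes the same documentai_v1beta2 client as in the sample above. When the code runs on GCP, Application Default Credentials resolve to the runtime's service account via the metadata server, so no key file is ever read from disk.

import google.auth
from google.cloud import documentai_v1beta2 as documentai

# Resolution order: GOOGLE_APPLICATION_CREDENTIALS -> gcloud ADC file -> metadata server.
credentials, project_id = google.auth.default()

# Passing credentials explicitly is optional; the client would discover the same ones anyway.
client = documentai.DocumentUnderstandingServiceClient(credentials=credentials)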
I am trying to connect BigQuery in Project A to Data Fusion in Project B, and it's asking me to enter a service key file. I have tried uploading the service key file to Cloud Storage in Project B and providing the link, but it's asking me to provide a local file path.
Can someone help me on this?
Thanks in advance.
Can you try this: grant the BigQuery permissions of Project A to the following Data Fusion service accounts from Project B (console steps below; a scripted sketch follows them).
service-project_number@gcp-sa-datafusion.iam.gserviceaccount.com
project_number-compute@developer.gserviceaccount.com
Steps:
Navigate to the customer project that contains the CDF instance and copy the project number (this is found on the Home Page in the Project Info card)
Navigate to the project that contains the resources you would like to interact with.
In the sidebar, click on ‘IAM & Admin’
Click on ‘Add’ at the top of the page.
Provide the first service account name from the list above, making sure to replace project_number with the actual number you obtained in step 1.
Grant the Admin role for the resource you would like to interact with, e.g. BigQuery Admin for reading/writing to BigQuery. For BigQuery, you will also need to grant the BigQuery Data Owner role.
Repeat steps 5 and 6 for the second service account in the list above.
In your pipeline, ensure you define the correct project ID for the sources/sinks. Using 'auto-detect' will default to the customer project that contains the CDF instance.
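If you would rather script the grants than click through the console, here is a sketch using the Cloud Resource Manager API. The project id, service account number and roles are placeholders/assumptions; it requires google-api-python-client and Application Default Credentials allowed to set IAM policy on Project A.

from googleapiclient import discovery

resource_project = "project-a-id"  # placeholder: project that owns the BigQuery data
service_account = "service-1234567890@gcp-sa-datafusion.iam.gserviceaccount.com"  # placeholder

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=resource_project, body={}).execute()
for role in ("roles/bigquery.admin", "roles/bigquery.dataOwner"):
    policy.setdefault("bindings", []).append(
        {"role": role, "members": ["serviceAccount:" + service_account]}
    )
crm.projects().setIamPolicy(
    resource=resource_project, body={"policy": policy}
).execute()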
Can you try downloading the service key JSON file locally, i.e. to your local computer? Then put the file into some folder and provide the full path to that service key file in the BigQuery properties.
I wanted to test out Power BI Embedded, so I downloaded the sample app that is able to publish a pbix file and embed it.
So I created the simplest Power BI file one can make, with Azure SQL as the underlying data source, using the DirectQuery option.
I successfully imported the Power BI file into my workspace collection.
I changed the connection string of my Power BI file successfully.
After that, the code to patch the gateway with the username and password credentials fails.
Then, when I tried to view the embedded report, I got this error.
I believe the connection string is in the correct format because it was updated successfully. I also already tried pointing it to another SQL database, and then the error shows the other SQL database in the error message.
1) I thought this could be because the gateway does not get the credentials that I gave it. Is that correct?
2) Does someone know how I can fix this?
Thanks in advance!
As @Cuong Le stated, this was a Microsoft issue at first.
When the problem was fixed I still received a BadRequest exception. After trying to update the credentials with the PowerBI-CLI, the problem became clearer: I needed to grant access from Azure IP addresses to the relevant SQL database (via the firewall rules the error below points to). Once I did that, I was able to update the credentials. Unfortunately, the PowerBI API SDK's exception messages are not as good as the PowerBI-CLI messages. I also tried it with the PowerBI API SDK and it worked as well.
The exception message I got was the following:
[ powerbi ] {"error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","pbi.error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","parameters":{},"details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorCode","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Cannot open server 'engiep-dev-weeu-sql' requested by the login. Client with IP address 'xx.xx.xx.213' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."}},{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingNativeErrorCode","detail":{"type":1,"value":"40615"}}]}}}
The correct connectionstring format to use is:
Data Source=yourDataSource;Initial Catalog=yourDataBase;User ID=yourUser;Password=yourPass;
(Don't use quotes anywhere.)
I was experiencing the same issue. It is also an open issue on GitHub.
(Attached image.)
To solve this, I used the PowerBI CLI 1.0.4 from npm and used the Update Connection operation (remember to add -d).
powerbi update-connection -c [workspace name] -k [access key] -w [workspace id] -d [dataset id] -s "Data Source=xxx.database.windows.net;Initial Catalog=xxx;User ID=xxx;Password=xxx"
If it fails, do it (the update-connection operation) again.
The issue happens because sometimes the data source credentials are not carried over to the workspace.
In the case of reports that use DirectQuery, credentials are never brought over with the pbix, since an import is done and all private info is stripped out.
Hope this helps!
Thanks