Adding a data source in a gateway - Power BI

This is the error I'm facing when trying to add a data source in a gateway:
Unable to connect: We encountered an error while trying to connect to
. Details: "We could not register this data source for any gateway
instances within this cluster. Please find more details below about
specific errors for each gateway instance.
Activity ID:
66610131-d0fc-4787-9432-36b2bbc95dbb
Request ID:
b9231dc4-dd80-8b86-6301-c171aad3b879
Cluster URI:
https://wabi-south-east-asia-redirect.analysis.windows.net
Status code:
400
Error Code:
DMTS_PublishDatasourceToClusterErrorCode
Time:
Wed Oct 17 2018 12:48:44 GMT-0700 (Pacific Daylight Time)
Version:
13.0.6980.207
Gateway:
Invalid connection credentials.
Underlying error code:
-2147467259
Underlying error message:
The credentials provided for the File source are invalid. (Source at c:\users\rohan\documents\latest 2018\sara\new folder\2018_sales.xls.)
DM_ErrorDetailNameCode_UnderlyingHResult:
-2147467259
Microsoft.Data.Mashup.CredentialError.DataSourceKind:
File
Microsoft.Data.Mashup.CredentialError.DataSourcePath:
c:\users\rohan\documents\latest 2018\sara\new folder\2018_sales.xls
Microsoft.Data.Mashup.CredentialError.Reason:
AccessUnauthorized
Microsoft.Data.Mashup.MashupSecurityException.DataSources:
[{"kind":"File","path":"c:\\users\\rohan\\documents\\latest 2018\\sara\\new folder\\2018_sales.xls"}]
Microsoft.Data.Mashup.MashupSecurityException.Reason:
AccessUnauthorized

Related

Can't upload image mikrotik-chr on Google Cloud

I started creating a mikrotik-chr image from my bucket, but it always fails with an error. I don't know how to fix it.
[inflate.import-virtual-disk]: 2021-08-16T05:39:39Z CreateInstances: Creating instance "inst-importer-inflate-6t2qt".
[inflate]: 2021-08-16T05:39:46Z Error running workflow: step "import-virtual-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-08-15T22:39:46.802-07:00 Error:0xc00007b770 HttpErrorMessage:SERVICE UNAVAILABLE HttpErrorStatusCode:503 Id:1873370325760361715 InsertTime:2021-08-15T22:39:40.692-07:00 Kind:compute#operation Name:operation-1629092379433-5c9a6a095186f-620afe4b-ba26ba50 OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a/operations/operation-1629092379433-5c9a6a095186f-620afe4b-ba26ba50 StartTime:2021-08-15T22:39:40.692-07:00 Status:DONE StatusMessage: TargetId:6947401086746772724 TargetLink:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a/instances/inst-importer-inflate-6t2qt User:606260965808#cloudbuild.gserviceaccount.com Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Mon, 16 Aug 2021 05:39:46 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}:
Code: ZONE_RESOURCE_POOL_EXHAUSTED
Message: The zone 'projects/circular-jet-322614/zones/asia-southeast2-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
[inflate]: 2021-08-16T05:39:46Z Workflow "inflate" cleaning up (this may take up to 2 minutes).
[inflate]: 2021-08-16T05:39:48Z Workflow "inflate" finished cleanup.
[import-image]: 2021-08-16T05:39:48Z Finished creating Google Compute Engine disk
[import-image]: 2021-08-16T05:39:49Z step "import-virtual-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-08-15T22:39:46.802-07:00 Error:0xc00007b770 HttpErrorMessage:SERVICE UNAVAILABLE HttpErrorStatusCode:503 Id:1873370325760361715 InsertTime:2021-08-15T22:39:40.692-07:00 Kind:compute#operation Name:operation-1629092379433-5c9a6a095186f-620afe4b-ba26ba50 OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a/operations/operation-1629092379433-5c9a6a095186f-620afe4b-ba26ba50 StartTime:2021-08-15T22:39:40.692-07:00 Status:DONE StatusMessage: TargetId:6947401086746772724 TargetLink:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a/instances/inst-importer-inflate-6t2qt User:606260965808#cloudbuild.gserviceaccount.com Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/circular-jet-322614/zones/asia-southeast2-a ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Mon, 16 Aug 2021 05:39:46 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}: Code: ZONE_RESOURCE_POOL_EXHAUSTED; Message: The zone 'projects/circular-jet-322614/zones/asia-southeast2-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
ERROR
ERROR: build step 0 "gcr.io/compute-image-tools/gce_vm_image_import:release" failed: step exited with non-zero status: 1
You will need to check whether you have enough CPU and other resource quota in 'projects/circular-jet-322614/zones/asia-southeast2-a'. The resource requirements can be found in the deployment specs of the workload.
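To act on that advice, here is a minimal sketch with the gcloud CLI (the project and zone are taken from the log above; the bucket path is a hypothetical placeholder, and --data-disk is an assumption since mikrotik-chr is not an OS the importer detects):

# The quotas section of the output lists usage vs. limit per resource
gcloud compute regions describe asia-southeast2 --project=circular-jet-322614

# ZONE_RESOURCE_POOL_EXHAUSTED can also mean the zone is temporarily out of
# capacity independent of your quota, so retrying in another zone often works:
gcloud compute images import mikrotik-chr \
    --source-file=gs://YOUR_BUCKET/chr.img \
    --data-disk \
    --zone=asia-southeast2-b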

Error when trying to create an IoT component in AWS Greengrass version 2 using CloudFormation

I recently did a configuration where I created a Greengrass component from a recipe using the AWS console, and another where I imported the config from a Lambda function. They both work well when I do it through the console. However, I want to produce this same configuration using CloudFormation. I have read the documentation here (component version), and it says I can add a recipe file inline or reference a Lambda function using LambdaFunctionRecipeSource. However, all my attempts fail with the error:
Resource handler returned message: "User: arn:aws:iam::accountIDHere:user/harisu is not
authorized to perform: null (Service: GreengrassV2, Status Code: 403, Request ID: f517f1ff-a387-
4380-8a47-bd6d41fd628e, Extended Request ID: null)"
(RequestToken: d6f8042d-687e-0afa-e75d-d80f27a7f177, HandlerErrorCode: AccessDenied)
I have, however, granted administrator access to the user harisu and ensured he has full access to the Greengrass service.
My example CloudFormation template is:
TestComponentVersion:
  Type: AWS::GreengrassV2::ComponentVersion
  Properties:
    InlineRecipe: "---
      RecipeFormatVersion: '2020-01-25'
      ComponentName: com.example.HelloWorld
      ComponentVersion: 1.0.0
      ComponentDescription: My first AWS IoT Greengrass component.
      ComponentPublisher: Amazon
      ComponentConfiguration:
        DefaultConfiguration:
          Message: world
      Manifests:
        - Name: Linux
          Platform:
            os: linux
          Lifecycle:
            Run: |
              python3 {artifacts:path}/hello_world.py '{configuration:/Message}'
          Artifacts:
            - URI: s3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py
      "
I would appreciate any help.
For anyone else who comes across this when troubleshooting, refer to https://greengrassv2.workshop.aws/en/chapter4_createfirstcomp/30_step3.html. It states that this error is typically caused by the recipe definition not being valid JSON or YAML. In my case this was true: I had a YAML syntax error. I don't immediately see whether that is the case here or where your error is.
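One cheap way to catch this before deploying is to run the recipe through a YAML parser locally. A minimal sketch, assuming PyYAML is installed and the recipe body is saved to a hypothetical recipe.yaml:

# Prints a parse error pointing at the offending line if the YAML is invalid
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("recipe parses OK")' recipe.yaml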

error "No node was available to execute the query" while using MCS service of AWS

After some time we got this error:
2020-03-30 14:09:45 +0000 [http-nio-12082-exec-10] ERROR c.i.i.c.u.c.GenericCassandraDao 2fcc2418-2a0f-401e-b51f-b57fc0b305ea - Error in inserting data for tenant code: anne
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
    at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
    at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
    at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
    at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
    at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:210)
    at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
    at us.cassandra.GenericCassandraDao.save(GenericCassandraDao.java:79)
    at us.cassandra.implementations.BasicCassandraStorage.lambda$save$0(BasicCassandraStorage.java:26)
    at io.github.resilience4j.circuitbreaker.CircuitBreaker.lambda$decorateSupplier$4(CircuitBreaker.java:536)
    at us.cassandra.implementations.CircuitBreakerStorage.wrap(CircuitBreakerStorage.java:34)
Restarting helps, but it's not a solution. I'd appreciate it if anyone can help me find the issue.
You can execute nodetool status in Cassandra to obtain the name of the datacenter:
...$ ./nodetool status
Then change .withLocalDatacenter("us-east-1") to .withLocalDatacenter("your-data-center-name") and retry.
See this guide: https://docs.aws.amazon.com/mcs/latest/devguide/ManagedCassandraService.pdf
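Since MCS is a managed service, nodetool may not be available to you. As an alternative sketch, the datacenter name can also be read from the standard system.local table with cqlsh (example output shown for the us-east-1 case above):

cqlsh> SELECT data_center FROM system.local;

 data_center
-------------
 us-east-1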

Connect to Athena using SQL Workbench

I am trying to connect to Athena using SQL Workbench. I followed all the instructions on pages 15 to 19 of this PDF file:
https://s3.amazonaws.com/athena-downloads/drivers/JDBC/SimbaAthenaJDBC_2.0.7/docs/Simba+Athena+JDBC+Driver+Install+and+Configuration+Guide.pdf
If I use the default Athena bucket name, I get this error:
S3://aws-athena-query-results-51346970XXXX-us-east-1/Unsaved
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: No such host is known
(athena.useast-1.amazonaws.com) [Execution ID not available]
For any other bucket name, I get this error:
s3://todel162/testfolder-1
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: athena.useast-1.amazonaws.com
[Execution ID not available]
How do I connect to Athena using a JDBC client?
Copy-pasting introduced an issue with the string on page 16:
jdbc:awsathena://AwsRegion=useast-1;
It should have a hyphen, like this:
jdbc:awsathena://AwsRegion=us-east-1;
Once I corrected this, I was able to connect.
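For reference, a full connection string in that format would look like the following sketch (region and result bucket taken from the question; S3OutputLocation is the Simba driver property for the query-results location):

jdbc:awsathena://AwsRegion=us-east-1;S3OutputLocation=s3://aws-athena-query-results-51346970XXXX-us-east-1/Unsaved;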

How to determine which permission is missing when using a particular AWS feature?

I regularly run into issues where one particular action is missing from the IAM user's permissions when using AWS. A good example is uploading data to S3. Hadoop throws the following exception:
2018-08-03 09:29:46,112 INFO [IPC Server handler 27 on 42415] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1526322305732_0008_m_000008_0 is : 0.0
2018-08-03 09:29:46,134 FATAL [IPC Server handler 26 on 42415] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1526322305732_0008_m_000008_0 - exited : com.cloudera.com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 008CAB66479B6842), S3 Extended Request ID: JtUlF07hBNr03NhytAQj6biGX8I/YKjtbUcz82PkjbLoDeoW3W8AVLvhAdXWk7V9Fc8G4oOy1d8=
at com.cloudera.com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1050)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1027)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:961)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:78)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:114)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
This exception does not have enough information to determine which action is missing for the user. What is the best way of identifying which action must be added for such a task?
We frequently use CloudTrail, where you can see what the attempted call was, whether the response was successful, and if not, the reason.
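A minimal sketch of that lookup with the AWS CLI (requires jq; note that object-level S3 calls such as GetObject only show up if CloudTrail data events are enabled for the bucket):

# List recent S3 API calls recorded by CloudTrail and keep only the failed ones
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=s3.amazonaws.com \
    --max-results 50 \
    --query 'Events[].CloudTrailEvent' --output text \
  | jq -c 'select(.errorCode != null) | {eventName, errorCode, errorMessage}'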