I upgraded my Amplify version from 4.45.0 to 5.2.0. Now when I run amplify push, I get the following error:
Following resources failed
Resource Name: ci5lt23eofhvxlc3an7db4d7veGraphQLSchema (AWS::AppSync::GraphQLSchema)
Event Type: update
Reason: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: GZQF1T9HWCCED75H; S3 Extended Request ID: qicfcwF2YUNdqMDU3EUkt+hsXIQawcirDG7TIX+peEEkAWOE1v9ee6n2L5Qc2I8uePyAXg2eJ4U=; Proxy: null)
The error went away after pushing with force: amplify push --force
Forcefully pushing to the cloud did not help in my case.
It turns out my GraphQL API key had an expiration date, which I realized after running amplify update api; here it was:
Authorization modes
- Default: API key expiring Tue Oct 25 2022 ....
Then I renewed my key and it worked out fine.
I faced the same issue and found out that it was due to this part of the GraphQL schema file:
This "input" configures a global authorization rule to enable public access to all models in this schema. Learn more about authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules
input AMPLIFY { globalAuthRule: AuthRule = { allow: public } } # FOR TESTING ONLY!
type User #model {
id: ID!
username: String!
email: String!
}
The solution is to remove everything before "type" (the comment and the input AMPLIFY line), then run amplify push in your shell. The schema that remains is sketched below.
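After the removal, the file contains just the model definitions (note that without the global public rule, your models fall back to the API's default authorization mode or any @auth rules you define):

type User @model {
  id: ID!
  username: String!
  email: String!
}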
My goal is to generate a token for Power BI. To achieve it, I created an application for my organization by selecting "Embed for your organization", i.e. I registered the application in Azure AD.
The endpoint I used is POST: https://login.microsoftonline.com/common/oauth2/token
and in the Body tab I gave the below inputs:
data: {
grant_type: password
scope:
resource:
client_id:
username:
password:
}
My requirement is to generate the token with the ROPC authentication flow only.
This is the page I am referring to: Solved: Re: How to generate the authorization code and the... - Microsoft Power BI Community
But after hitting Send, I got this error:
{
"error": "invalid_request",
"error_description": "AADSTS900144: The request body must contain the following parameter: 'grant_type'.\r\nTrace ID: \r\nCorrelation ID: \r\nTimestamp: 2022-09-25 13:40:05Z",
}
Not sure what I am missing in configuring the application or Postman. Any suggestions?
I tried to reproduce this in my environment and got the same error with the request below:
POST: https://login.microsoftonline.com/common/oauth2/token
data: {
grant_type: password
scope: openid
resource: https://analysis.windows.net/powerbi/api
client_id: ******
username: ******
password: ******
}
To resolve the error, send the parameters in the Body tab -> x-www-form-urlencoded; AADSTS900144 means the token endpoint could not find grant_type in the body, which typically happens when the values are sent as raw JSON instead of form fields. If your application is not public, also add a client_secret parameter. With that change, I was able to generate the access token successfully.
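For reference, a sketch of the form-encoded body with placeholder values (client_secret applies only to confidential clients; the resource value is the Power BI API resource used above):

grant_type=password
scope=openid
resource=https://analysis.windows.net/powerbi/api
client_id=<application-id>
client_secret=<client-secret>
username=<username>
password=<password>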
I have two HTTP endpoints set up:
GET /users/{userId}
GET /users/{userId}/notes/{noteId}
The GET User returns a payload which includes a list of multiple noteIds, which can be used to make multiple requests to the GET Note endpoint.
I am trying to configure AppSync to fetch all of this data in a single query, but I can't get the list to populate with objects.
Schema:
type Query {
getUser(userId: String!): User
getNote(userId: String!, noteId: String!): Note
}
type User {
userId: ID!
firstName: String!
lastName: String!
notes: [Note]
}
type Note {
noteId: ID!
noteText: String!
createdDatetime: Int!
}
I have a data source set up for each of the endpoints, and I have a resolver for getUser and for getNote; I also have a resolver for User.notes, which is the same as getNote. These resolvers have this response mapping:
#if($ctx.error)
$util.error($ctx.error.message, $ctx.error.type)
#end
#if($ctx.result.statusCode == 200)
$ctx.result.body
#else
$utils.appendError($ctx.result.body, "$ctx.result.statusCode")
#end
My resolver for the GET Note endpoint (including the User.notes field resolver) looks like this:
{
"version": "2018-05-29",
"method": "GET",
"resourcePath": $util.toJson("/prod/users/$ctx.args.userId/notes/$ctx.args.noteId"),
"params":{
"headers":{
"Content-Type": "application/json",
}
}
}
I can see from the logs that AppSync attempts to run the GET Note resolver, but the resource path doesn't seem to get populated with any IDs. (I can see this in the custom authorizer on the endpoint, which logs the method ARN, and it still contains the literal $ctx.args placeholders.)
It feels like this is a common use case, but I can't find a solution or examples anywhere. Is my approach correct, or do I need a different solution?
I think the first problem is with your User.notes resolver and how you are accessing userId and noteId. When you have field resolvers, you should use ctx.source to access the parent field [Ref.]. For example, you should use ctx.source.userId in your User.notes field resolver, as sketched below.
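A sketch of the request mapping with that change; the User object returned by getUser is exposed to the field resolver as $ctx.source (whether it carries a single usable noteId field is an assumption here, since notes is a list, which is why the batching suggestion below matters):

{
    "version": "2018-05-29",
    "method": "GET",
    ## The parent User is available as $ctx.source, so the IDs come
    ## from the parent object rather than from $ctx.args.
    "resourcePath": $util.toJson("/prod/users/$ctx.source.userId/notes/$ctx.source.noteId"),
    "params": {
        "headers": {
            "Content-Type": "application/json"
        }
    }
}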
Secondly, as you are going to fetch individual notes from your getNote HTTP endpoint: AppSync supports this type of behavior when proxied through AWS Lambda, using BatchInvoke. Please see "Advanced Use Case: Batching" at this link to get a better idea. Also, I think this SO post is relevant to your use case.
One other possibility is to have another HTTP endpoint that returns all of a user's notes at once, but I am not sure if that is possible in your case.
I have a service account with domain-wide delegation set up, and I'm trying to create new accounts (google-api-services-admin-directory) using the service account and then add some preset calendars (google-api-services-calendar) to the newly created accounts.
I've had no problems with the Directory API: I had to create a delegated (admin) user using the service account, and all the Directory API calls work fine.
However, I've been having trouble getting the Calendar API calls to work.
Java dependencies:
compile group: 'com.google.auth', name: 'google-auth-library-oauth2-http', version:'0.20.0'
compile group: 'com.google.apis', name: 'google-api-services-admin-directory', version:'directory_v1-rev53-1.20.0'
compile group: 'com.google.apis', name: 'google-api-services-calendar', version:'v3-rev20200315-1.30.9'
Java code:
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
private static final List<String> SCOPES =
Arrays.asList(DirectoryScopes.ADMIN_DIRECTORY_USER, DirectoryScopes.ADMIN_DIRECTORY_GROUP,
CalendarScopes.CALENDAR);
private static final String CREDENTIALS_FILE_PATH = "config/google-service-account-credentials.json";
.....
HTTP_TRANSPORT = GoogleNetHttpTransport.newTrustedTransport();
sourceCredentials =
ServiceAccountCredentials.fromStream(new FileInputStream(CREDENTIALS_FILE_PATH));
sourceCredentials = (ServiceAccountCredentials) sourceCredentials.createScoped(SCOPES);
.....
GoogleCredentials targetCredentials = sourceCredentials.createDelegated("newuser@email");
HttpRequestInitializer requestInitializer = new HttpCredentialsAdapter(targetCredentials);
targetCredentials.refreshIfExpired();//Not sure if this is required. It didn't help though
Calendar calendarService = new Calendar.Builder(HTTP_TRANSPORT, JSON_FACTORY, requestInitializer).setApplicationName(MainApp.SERVICE_NAME).build();
for (String calendarKey : listOfCalendars) {
CalendarListEntry cle = new CalendarListEntry();
cle.setId(calendarKey);
calendarService.calendarList().insert(cle).execute();//Fails with a 401
}
Stack Trace :
Caused by: java.io.IOException: Error getting access token for service account: 401 Unauthorized
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:444)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:157)
at com.google.auth.oauth2.OAuth2Credentials.refreshIfExpired(OAuth2Credentials.java:174)
at myApp.GSuiteSDKHelper.updateDefaultCalendars(GSuiteSDKHelper.java:169)
... 65 more
Caused by: com.google.api.client.http.HttpResponseException: 401 Unauthorized
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1113)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:441)
... 68 more
And the interesting part is that the error is intermittent. After a redeploy, I can always get my first attempt to work. Following that, it is a hit or miss.
I did add the service account to the calendars that I'm trying to add, and also ensured the service account is an "owner" of those calendars.
Something similar happened to me; in my case I was able to solve it by adding the scopes "https://www.googleapis.com/auth/userinfo.email" and "https://www.googleapis.com/auth/userinfo.profile".
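Applied to the code above, that means extending the SCOPES list; a sketch in which only the two userinfo scope strings are new:

private static final List<String> SCOPES =
    Arrays.asList(DirectoryScopes.ADMIN_DIRECTORY_USER, DirectoryScopes.ADMIN_DIRECTORY_GROUP,
        CalendarScopes.CALENDAR,
        // Scopes that resolved the intermittent 401:
        "https://www.googleapis.com/auth/userinfo.email",
        "https://www.googleapis.com/auth/userinfo.profile");

Keep in mind that with domain-wide delegation, every scope you request must also be authorized for the service account's client ID in the Google Admin console, or the token request is rejected.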
I am trying to list objects in a GCS bucket using the latest aws-sdk Java library.
Code snippet:
ClientConfiguration clientConfiguration = new ClientConfiguration();
// Solution is update the Signer Version.
clientConfiguration.setSignerOverride("S3SignerType");
AWSCredentials awsCredentials = new BasicAWSCredentials("XXX","XXX");
AmazonS3 amazonS3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
.withClientConfiguration(clientConfiguration)
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("https://storage.googleapis.com","Multi-Regional")).build();
String bucketName = "bucket_name";
// List Objects
amazonS3Client.listObjects(bucketName);
But I am receiving an InvalidArgument error; refer to the error and DEBUG logs below.
I am able to do getObject and putObject calls with the above amazonS3Client.
Any ideas?
2017-11-13 17:54:15,360 [main] DEBUG com.amazonaws.request - Sending Request: GET https://bucket_name.storage.googleapis.com / Parameters: ({"encoding-type":["url"]}Headers: (User-Agent: aws-sdk-java/1.11.158 Linux/4.10.0-38-generic Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131, amz-sdk-invocation-id: 121cd76e-1374-4e5d-9e68-be22ee2ad17a, Content-Type: application/octet-stream, )
2017-11-13 17:54:16,316 [main] DEBUG com.amazonaws.request - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Invalid argument. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: null), S3 Extended Request ID: null
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Invalid argument. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588)
at
S3 has a parameter to its object list call called "encoding-type" that, when set to "url", encodes characters that can't be rendered natively in XML 1.0 using URL encoding. The client library appears to be using that flag. I don't believe that GCS's XML API supports that parameter, so the call fails with an InvalidArgument error.
You can probably avoid this by using a ListObjectsRequest and calling setEncodingType(null), although I haven't tried.
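An untried sketch of that suggestion (ListObjectsRequest and ObjectListing are standard com.amazonaws.services.s3.model classes; whether clearing the value actually stops the SDK from sending encoding-type=url is the unverified part):

ListObjectsRequest request = new ListObjectsRequest().withBucketName(bucketName);
// Explicitly clear the encoding type so the SDK should omit the
// encoding-type=url query parameter that GCS rejects.
request.setEncodingType(null);
ObjectListing listing = amazonS3Client.listObjects(request);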
I'm in the same boat but with the aws-sdk php library. I tracked down the issue to the constructor for the S3Client class where a bunch of middleware are added, one of which is setting the encoding type. Commenting out this line allows me to successfully perform the request, so I know I'm on the right track.
$stack->appendSign(PutObjectUrlMiddleware::wrap(), 's3.put_object_url');
$stack->appendSign(PermanentRedirectMiddleware::wrap(), 's3.permanent_redirect');
$stack->appendInit(Middleware::sourceFile($this->getApi()), 's3.source_file');
$stack->appendInit($this->getSaveAsParameter(), 's3.save_as');
$stack->appendInit($this->getLocationConstraintMiddleware(), 's3.location');
// $stack->appendInit($this->getEncodingTypeMiddleware(), 's3.auto_encode');
$stack->appendInit($this->getHeadObjectMiddleware(), 's3.head_object');
See if the aws-sdk for Java at least gives you some options to conditionally apply the middleware; it appears to be no dice with the PHP version.
As Brandon and Pez have noticed, GCS does not like the EncodingType header that's being added by the S3Client natively.
Fortunately, there's an easy way to fix this using a piece of middleware, without editing the vendor folder (which should generally be avoided).
use Aws\Middleware;
$client = new S3Client([
'credentials' => [
'key' => 'XXXX',
'secret' => 'YYYY'
],
'region' => 'europe',
'endpoint' => 'https://storage.googleapis.com',
'version' => 'latest'
]);
$middleware = Middleware::tap(function ($command, $request = null) {
unset($command['EncodingType']);
});
$client->getHandlerList()->appendInit($middleware, 'encode-type-interceptor');
See also: https://blog.bandhosting.nl/blog/avoid-listobjects-invalid-query-parameter-s-encoding-type-errors-when-using-s3-with-google-cloud-storage
I ran into the same issue and found that Amazon Java API includes several hooks during the S3 calls that can be used to remove the encoding-type from the HTTP request.
// Hypothetical wrapper method name; getAwsCredentialsProvider() and getEndpointConfiguration() are your own helpers.
public AmazonS3 buildGcsCompatibleClient(String regionName) {
return AmazonS3ClientBuilder.standard()//
.withCredentials(getAwsCredentialsProvider())//
.withEndpointConfiguration(getEndpointConfiguration(regionName))//
.withRequestHandlers(new GoogleRequestHandler()) // IMPORTANT PART
.build();
}
public class GoogleRequestHandler extends RequestHandler2 {
@Override
public void beforeRequest(Request<?> request) {
// google does not support the encoding-type parameter so just remove it from the request
// This appears to be only true for ListObjects
if (request.getOriginalRequest() instanceof ListObjectsRequest) {
Map<String, List<String>> params = request.getParameters();
params.remove("encoding-type");
}
}
}
See RequestHandler2 for more documentation.
I'm trying to use the AWS SDK for Go to automate app runs in AWS Device Farm. But any app uploaded with the Go version of the SDK never changes status from "INITIALIZED". If I upload via the AWS Console web UI, everything works fine.
Example code for the upload:
func uploadApp(client *devicefarm.DeviceFarm, appType, projectArn string) string {
params := &devicefarm.CreateUploadInput{
Name: aws.String(*appName),
ProjectArn: aws.String(projectArn),
Type: aws.String(appType),
}
resp, err := client.CreateUpload(params)
if err != nil {
log.Fatal("Failed to upload an app because of: ", err.Error())
}
log.Println("Upload ARN:", *resp.Upload.Arn)
return *resp.Upload.Arn
}
In response I got something like:
{
Upload: {
Arn: "arn:aws:devicefarm:us-west-2:091463382595:upload:c632e325-266b-4bda-a74d-0acec1e2a5ae/9fbbf140-e377-4de9-b7df-dd18a21b2bca",
Created: 2016-01-15 14:27:31 +0000 UTC,
Name: "app-debug-unaligned.apk",
Status: "INITIALIZED",
Type: "ANDROID_APP",
Url: "bla-bla-bla"
}
}
The status never changes from "INITIALIZED", no matter how long I wait. As I mentioned, apps scheduled from the UI work fine.
How do I figure out the reason for this?
=======================================
Solution:
1) After CreateUpload, you must upload the file using the pre-signed S3 link from the response
2) The upload should be executed as an HTTP PUT request to the received URL, with the file content in the body
3) The ContentType parameter should be specified in &devicefarm.CreateUploadInput, and the same value should be used for the Content-Type header of the PUT request
4) If the PUT request is sent from Go code, the Content-Length header must be set manually (see the sketch after this list)
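A sketch of steps 2-4 in Go (uploadURL and contentType come from the CreateUpload response and input; imports needed: fmt, net/http, os):

func putFile(uploadURL, contentType, filePath string) error {
    f, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    req, err := http.NewRequest("PUT", uploadURL, f)
    if err != nil {
        return err
    }
    // Content-Type must match the ContentType given to CreateUpload.
    req.Header.Set("Content-Type", contentType)
    // net/http cannot infer the length from an *os.File, so set it manually.
    req.ContentLength = fi.Size()
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("upload failed with status %s", resp.Status)
    }
    return nil
}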
When you call the CreateUpload API, Device Farm will return an "Upload" response containing a "Url" field.
{
Upload: {
Arn: "arn:aws:devicefarm:us-west-2:....",
Created: 2016-01-15 14:27:31 +0000 UTC,
Name: "app-name.apk",
Status: "INITIALIZED",
Type: "ANDROID_APP",
Url: "bla-bla-bla"
}
}
The returned Url, "bla-bla-bla", is a pre-signed S3 URL for you to upload your application. Documentation on using a pre-signed URL to upload an object: http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
Once your application has been uploaded, the app will be processed. The status of your upload will change to "PROCESSING" and "SUCCEEDED" (or "FAILED" if something is wrong). Once it's in "SUCCEEDED" status, you can use it to schedule a run.
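If you want to block until processing finishes, here is a hedged sketch of polling with the same Go client (GetUpload is the matching SDK call; imports: fmt, time, plus the aws and devicefarm packages already used above):

func waitForUpload(client *devicefarm.DeviceFarm, uploadArn string) error {
    for {
        resp, err := client.GetUpload(&devicefarm.GetUploadInput{Arn: aws.String(uploadArn)})
        if err != nil {
            return err
        }
        // Upload moves INITIALIZED -> PROCESSING -> SUCCEEDED (or FAILED).
        switch *resp.Upload.Status {
        case "SUCCEEDED":
            return nil
        case "FAILED":
            return fmt.Errorf("upload processing failed")
        }
        time.Sleep(5 * time.Second)
    }
}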