Trying to get filtered values using Laravel, but it isn't working properly (wrong result) - laravel-5.5

I have a users table and a subscriptions table.
The subscriptions table has a foreign key column user_id and a status column of type enum with the set of values ('canceled','active','skipped','unpaid','pastdue','expired').
In the User model, I defined the relation like this:
public function subscription()
{
    return $this->hasMany(Subscription::class);
}
Now I am implementing filter functionality in this query function in my DataTable class:
public function query(User $model)
{
    if ($this->_filters['subscription_status']) {
        $status = $this->_filters['subscription_status'];
        $model = $model->whereHas('subscription', function ($query) use ($status) {
            $query->where('status', $status);
        });
    }
    return $model;
}
Now, when I try to get the list of users with subscription status canceled, the result is a list of users with status active as well as canceled. Why doesn't the code above work?

You should understand the difference between with and whereHas; they do different things.
To get the list of users with only their canceled subscriptions loaded:
$model = $model->with(['subscription' => function ($query) use ($status) {
    $query->where('status', $status);
}]);
To get the list of users that have a subscription with status canceled:
$model = $model->whereHas('subscription', function ($query) use ($status) {
    $query->where('status', $status);
});

Related

AWS Amplify GraphQL subscriptions fail with "Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Todo' (/onCreateChat/createdAt)"

I'm trying to add a todo from a Lambda function in AWS.
I created a Flutter project, added the API (GraphQL basic Todo sample), and added a function (with mutations enabled).
The Lambda works and effectively adds an entry to the TODO list. Unfortunately, the subscription started in Flutter returns an error:
Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Chat' (/onCreateChat/createdAt)
I saw this problem solved on GitHub, where aleksvidak states:
I had a similar problem, and what I realised is that the mutation which in fact triggers the subscription has to have the same response fields as the ones specified for the subscription response. This way it works for me.
This seems to solve many people's problem. Unfortunately, I don't understand what it means for basic TODO sample code.
Mutation:
type Mutation {
  createTodo(input: CreateTodoInput!, condition: ModelTodoConditionInput): Todo
  ...
Subscription:
type Subscription {
  onCreateTodo(filter: ModelSubscriptionTodoFilterInput): Todo
  @aws_subscribe(mutations: ["createTodo"])
  ...
Isn't this code aligned with what aleksvidak said? The mutation has the same response type (Todo) as the subscription (Todo), right?
In my case, the updatedAt field was missing from the mutation document I send from the Lambda function.
const query = /* GraphQL */ `
  mutation CREATE_TODO($input: CreateTodoInput!) {
    createTodo(input: $input) {
      id
      name
      updatedAt # <-- this was missing
    }
  }
`;

How to access Amazon Redshift with @aws-sdk/client-redshift-data

I developed a sample app whose backend DB is Redshift, and I try to execute a query with the following SDK code.
import { RedshiftDataClient, ExecuteStatementCommand } from '@aws-sdk/client-redshift-data';

export const resolvers: IResolvers<unknown, Context> = {
  Query: {
    user: (parent, args, context): User => ({ login: context.login }),
    region: (): string => getRegion(),
    getData: async () => {
      const redshift_client = new RedshiftDataClient({});
      const request = new ExecuteStatementCommand({
        ClusterIdentifier: 'testrs',
        Sql: `select * from test`,
        SecretArn: 'arn:aws:secretsmanager:us-east-1:12345561:secret:test-HYRSWs',
        Database: 'test',
      });
      try {
        const data = await redshift_client.send(request);
        console.log('data', data);
        return data;
      } catch (error) {
        console.error(error);
        throw new Error('Failed fetching data to Redshift');
      } finally {
        // execute regardless of error state
      }
    },
  },
};
It returned the following error:
ERROR AccessDeniedException:
User: arn:aws:sts::12345561:assumed-role/WebsiteStack-Beta-US-EAST-GraphQLLambdaServiceRole1BCPB5P3Q4IS9/GraphQLLambda
is not authorized to perform: redshift-data:ExecuteStatement on resource: arn:aws:redshift:us-east-1:12345561:cluster:testrs
because no identity-based policy allows the redshift-data:ExecuteStatement action
Must I use an SDK package like STS?
If someone has an opinion or materials, please let me know.
Thanks
I know that when using the AWS SDK for Java v2 for the exact same use case, you can successfully query data by building an ExecuteStatementRequest object and passing it to the Data Client's executeStatement method, like this:
if (num == 5)
    sqlStatement = "SELECT TOP 5 * FROM blog ORDER BY date DESC";
else if (num == 10)
    sqlStatement = "SELECT TOP 10 * FROM blog ORDER BY date DESC";
else
    sqlStatement = "SELECT * FROM blog ORDER BY date DESC";

ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
    .clusterIdentifier(clusterId)
    .database(database)
    .dbUser(dbUser)
    .sql(sqlStatement)
    .build();

ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
As shown here, the required values are clusterId, database, and dbUser.
I would assume the AWS SDK for JavaScript works the same way (I have not tried that SDK, however).
The reference docs confirm this:
Temporary credentials - when connecting to a cluster, specify the cluster identifier, the database name, and the database user name. Also, permission to call the redshift:GetClusterCredentials operation is required. When connecting to a serverless endpoint, specify the database name.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-redshift-data/classes/executestatementcommand.html
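For comparison, the same temporary-credentials approach can be sketched with the Redshift Data API from Python. This is only an illustration of the parameter shape: the db_user value "awsuser" is a placeholder assumption, and the build_execute_statement_params helper is my own, not part of any SDK.

```python
# Sketch: ExecuteStatement with temporary credentials (DbUser) instead
# of a SecretArn. The caller still needs IAM permission for the
# redshift-data actions, which is what the AccessDeniedException above
# is really about.

def build_execute_statement_params(cluster_id, database, db_user, sql):
    """Assemble ExecuteStatement parameters; passing DbUser makes the
    Data API fetch temporary credentials, so no SecretArn is needed."""
    return {
        "ClusterIdentifier": cluster_id,
        "Database": database,
        "DbUser": db_user,
        "Sql": sql,
    }

params = build_execute_statement_params("testrs", "test", "awsuser", "select * from test")
print(params)

# With boto3 installed and credentials configured, the call would be:
# import boto3
# client = boto3.client("redshift-data")
# response = client.execute_statement(**params)
```

Either way, the IAM role running the code must be allowed to perform redshift-data:ExecuteStatement, per the error message above.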

Dynamically Insert/Update Item in DynamoDB With Python Lambda using event['body']

I am working on a Lambda function that gets called from API Gateway and updates information in DynamoDB. I have half of this working really dynamically, and I'm a little stuck on updating. Here is what I'm working with:
a DynamoDB table with a partition key of guild_id
The dummy JSON I'm using:
{
  "guild_id": "126",
  "guild_name": "Posted Guild",
  "guild_premium": "true",
  "guild_prefix": "z!"
}
Finally, the Lambda code:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.resource("dynamodb")
    table = client.Table("guildtable")
    itemData = json.loads(event['body'])
    guild = table.get_item(Key={'guild_id': itemData['guild_id']})
    # If guild exists, update
    if 'Item' in guild:
        table.update_item(Key=itemData)
        responseObject = {}
        responseObject['statusCode'] = 200
        responseObject['headers'] = {}
        responseObject['headers']['Content-Type'] = 'application/json'
        responseObject['body'] = json.dumps('Updated Guild!')
        return responseObject
    # New guild, insert guild
    table.put_item(Item=itemData)
    responseObject = {}
    responseObject['statusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps('Inserted Guild!')
    return responseObject
The insert part is working wonderfully. How would I accomplish a similar approach with update_item? I want this to be as dynamic as possible so I can throw any JSON (within reason) at it and have it stored in the database. I also want the update method to handle fields added down the road.
I get the following error:
Lambda execution failed with status 200 due to customer function error: An error occurred (ValidationException) when calling the UpdateItem operation: The provided key element does not match the schema.
A "The provided key element does not match the schema" error means something is wrong with the Key (the primary key). Your schema's primary key is guild_id, a string. Non-key attributes belong in an UpdateExpression (or the legacy AttributeUpdates parameter); see the docs.
Your itemData includes non-key attributes, so it cannot be passed as the Key. Also make sure guild_id is the string "123" and not the number 123.
goodKey={"guild_id": "123"}
table.update_item(Key=goodKey, UpdateExpression="SET ...")
The docs have a full update_item example.

AWS AppSync RDS: $util.rds.toJSONObject() Nested Objects

I am using Amazon RDS with AppSync. I've created a resolver that joins two tables to get a one-to-one association between them and returns columns from both tables. What I would like to do is nest some columns under a key in the resulting JSON object parsed with $util.rds.toJsonObject().
Here's the schema:
type Parent {
  col1: String
  col2: String
  child: Child
}

type Child {
  col3: String
  col4: String
}
Here's the resolver:
{
  "version": "2018-05-29",
  "statements": [
    "SELECT parent.*, child.col3 AS `child.col3`, child.col4 AS `child.col4` FROM parent LEFT JOIN child ON parent.col1 = child.col3"
  ]
}
I tried naming the resulting columns with the dot syntax, but $util.rds.toJsonObject() doesn't put col3 and col4 under a child key. It should, because otherwise Apollo won't be able to cache and parse the entity.
Note: the dot syntax is not documented anywhere; some ORMs use this technique to convert SQL rows into properly nested JSON objects.
The @Aaron_H comment and answer were useful for me, but the response mapping template provided in that answer didn't work for me.
I managed to get a working response mapping template for my case, which is similar to the one in the question. The original answer included screenshots of the following, for query message(id: ID) { ... } (one message and the associated user are returned):
SQL request to user table;
SQL request to message table;
SQL JOIN tables request for message id=1;
GraphQL Schema;
Request and response templates;
AWS AppSync query.
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.message.js
The next example is for the query messages:
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.messages.js
Assuming your resolver is expected to return a list of Parent types, i.e. [Parent!]!, you can write your response mapping template logic like this:
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
#set($output = $utils.rds.toJsonObject($ctx.result)[0])
## Make sure to handle instances where fields are null
## or don't exist according to your business logic
#foreach($item in $output)
    #set($item.child = {
        "col3": $item.get("child.col3"),
        "col4": $item.get("child.col4")
    })
#end
$util.toJson($output)
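Outside of VTL, the flat-to-nested fold that such a template performs is easy to see in Python (nest_row is a hypothetical helper, just to illustrate the transformation):

```python
def nest_row(row, sep="."):
    """Fold flat 'parent.child' keys into nested objects, one level
    deep, mirroring what the mapping template does for child.* columns."""
    out = {}
    for k, v in row.items():
        if sep in k:
            parent, child = k.split(sep, 1)
            out.setdefault(parent, {})[child] = v
        else:
            out[k] = v
    return out

row = {"col1": "a", "col2": "b", "child.col3": "x", "child.col4": "y"}
print(nest_row(row))  # {'col1': 'a', 'col2': 'b', 'child': {'col3': 'x', 'col4': 'y'}}
```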

Fetch record with foreign key in laravel5

I am using Laravel 5. I have two tables named cars and providers. The cars table has a foreign key provider_id referencing the providers table.
My database schema as follows-
Schema::create('cars', function (Blueprint $table) {
    $table->increments('id');
    $table->integer('provider_id')->unsigned();
    $table->foreign('provider_id')->references('id')->on('providers')->onDelete('restrict');
});

Schema::create('providers', function (Blueprint $table) {
    $table->increments('id');
    $table->string('company_name');
});
Now, I want to fetch a car's information as well as the provider info for that particular id.
Take a look at this
Inside your Car model, create a provider method:
public function provider()
{
    return $this->belongsTo(\App\Provider::class);
}
Then, you can do something like:
$car = \App\Car::find($id);
And access the provider this way:
$car->provider;
Also, inside your provider model, add this:
public function cars()
{
    return $this->hasMany(\App\Car::class);
}
And you'll be able to access all cars from a particular provider.