Mapping RESTful architecture to CRUD functionality - web-services

I am trying to develop my first RESTful service in Java and am having some trouble mapping the methods to CRUD functionality.
My URI structure is as follows and maps to a basic database structure:
/databases/{schema}/{table}/
/databases is static
{schema} and {table} are dynamic and are resolved from the path parameters
This is what I have:
Method - URI - DATA - Comment
---------------------------------------------------------------------
GET - /databases - none - returns a list of databases
POST - /databases - database1 - creates a database named database1
DELETE - /databases - database1 - deletes the database1 database
PUT - /databases - database1 - updates database1
Currently, in the example above, I am passing the database name through as a JSON object. However, I am unsure if this is correct. Should I instead be doing this (using the DELETE method as an example):
Method - URI - DATA - Comment
---------------------------------------------------------------------
DELETE - /databases/database1 - none - deletes the database with the same name
If this is the correct method, and I needed to pass extra data, would the below then be correct:
Method - URI - DATA - Comment
---------------------------------------------------------------------
DELETE - /databases/database1 - some data - deletes the database with the same name
Any comments would be appreciated.

REST is an interface into your domain. Thus, if you want to expose a database, then CRUD will probably work. But there is much more to REST (see below).
REST-afarians will object to your service being RESTful, since it does not fit one of the key constraints: the hypermedia constraint. That can be addressed if you add links to the documents (hypermedia) that your service generates and serves. After this, your users will follow links and forms to change things in the application (databases, tables and rows in your example):
- GET /databases -> list of databases
- GET /databases/{name} -> list of tables
- GET /databases/{name}/{table}?page=1 -> first set of rows in table XXXXX
- POST /databases/{name}/{table} -> create a record
- PUT /databases/{name}/{table}/{PK} -> update a record
- DELETE /databases/{name}/{table}/{PK} -> send the record to the big PC in the sky...
Don't forget to add links to your documents!
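Since the question mentions Java, a minimal JAX-RS sketch of the verb-to-URI mapping might look like the following. The class, method names, and placeholder data are illustrative only, not a complete service:
import java.util.List;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/databases")
public class DatabaseResource {

    @GET
    public List<String> listDatabases() {
        // Return the database names; a hypermedia-friendly version would
        // return documents containing links instead of bare names.
        return List.of("database1", "database2"); // placeholder data
    }

    @GET
    @Path("/{name}")
    public List<String> listTables(@PathParam("name") String name) {
        // The database is identified by the URI, not by the request body.
        return List.of("table1", "table2"); // placeholder data
    }

    @DELETE
    @Path("/{name}")
    public Response deleteDatabase(@PathParam("name") String name) {
        // DELETE also identifies its target through the URI; no payload needed.
        return Response.noContent().build();
    }
}
The point of the sketch: the resource name lives in the path, so GET, PUT and DELETE never need a body just to say which database they target.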
Using REST only for CRUD is kind of like putting it in a straitjacket :) Your URIs can represent any concept, so try exposing more creative / richer URIs based on the underlying resources (functionality) that you want your service or web app to offer.
Take a look at this great article: How to GET a Cup of Coffee

GCP load balancers - query params or custom method

I was trying to set up a load balancer to cache requests only when a query param is present in the request or a custom method on the API is called.
Either something like:
BASE_URL/api/v1/asset/UUID?static_only=true
or something like:
BASE_URL/api/v1/asset/UUID:static
I've set up two NEGs linked to the same Cloud Run service. One of them has the CDN active so that it can cache responses.
Since this API serves some content that requires calls to other services whose data is subject to change, I want the option to cache the response only when the query param is present.
I tried to set up a url-map with the following configuration:
defaultService: DEFAULT_BACKEND
name: path-matcher-1
routeRules:
- matchRules:
  - prefixMatch: /api/v1/asset/*
    queryParameterMatches:
    - exactMatch: 'true'
      name: static_only
  priority: 1
  service: BACKEND-2
I couldn't find examples of the custom method, and my attempts with the query param didn't work. Is my approach wrong?
I guess there are two parts to this question.
1. How do I configure the URL map to accept a path that ends with a dynamic value (in this case the UUID of the resource)? The * doesn't seem to work.
2. Is it possible to configure the URL map to use custom methods (AIP-136)?
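For (1), my best guess so far is that prefixMatch treats the path as a literal prefix rather than a glob, so the trailing * would simply be dropped and the prefix semantics would cover the UUID suffix. A hedged, untested sketch of the same rule under that assumption:
defaultService: DEFAULT_BACKEND
name: path-matcher-1
routeRules:
- matchRules:
  - prefixMatch: /api/v1/asset/
    queryParameterMatches:
    - exactMatch: 'true'
      name: static_only
  priority: 1
  service: BACKEND-2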

Generate postman collection with custom structure using openapi

I'm putting together a better workflow to document our team's APIs, and since we'll later need to sell some of them, we absolutely need great documentation. We're using Postman, and as far as I can tell the correct workflow would be to create the collection from the OpenAPI description.
Our APIs are structured this way:
- {{baseUrl}}/basePath1/serviceName1
- {{baseUrl}}/basePath1/serviceName2
- {{baseUrl}}/basePath2/serviceName1
- {{baseUrl}}/basePath2/serviceName2
- {{baseUrl}}/basePath2/serviceName3
Since we want to create some test suites, we would like to split the APIs into main folders based on the basePath, and then subfolders that each describe a complete action, for example cleaning multiple pieces of data using different APIs. That would give a structure like this:
/basePath
  /cleanAll
    - api1
    - api2
  /saveInfo
    - api1
    - api2
Since using tags creates multiple folders at the top level, how am I supposed to use OpenAPI paths or tags so that I can generate the collection in the way I described?
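For reference, this is roughly how our tags are attached at the moment (names are placeholders); on import, each tag becomes its own top-level folder, with no basePath parent:
openapi: 3.0.3
info:
  title: Example API
  version: 1.0.0
paths:
  /basePath1/serviceName1:
    post:
      tags:
        - cleanAll
      responses:
        '200':
          description: OK
  /basePath1/serviceName2:
    post:
      tags:
        - cleanAll
      responses:
        '200':
          description: OK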

AWS Amplify filter for @searchable annotation

Currently I am using a DynamoDB instance for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data (posts, users, comments, etc.) in the same table. Now I want to make flexible queries on my data. I found out that I could use the @searchable annotation to create an Elasticsearch instance for a table annotated with @model.
In my GraphQL schema I only have one @model, since I only have one table. My problem is that I don't want to make everything in the table searchable, since that would most likely be very expensive. Some data doesn't need to be added to the Elasticsearch instance (for example, comment-related data). How could I handle this? Do I really have to split my schema into multiple tables to be able to manage the @searchable annotation? Couldn't I decide whether a row should be sent to Elasticsearch based on the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the @searchable directive to one of your models.
The Lambda code cannot be edited, and there is currently no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is to create a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point:
1. Add @searchable to one of your models in schema.graphql
2. Run amplify api gql-compile
3. Copy the generated template from the build folder, amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
4. Remove the @searchable directive from the model added in step 1
5. Start editing your new template copied in step 3
6. Add a Lambda and use it in the template as the resolver for the DynamoDB stream
This approach will give you total control of the resources related to the Elasticsearch service, but it will also require you to manage them all on your own.
Alternatively, just create a table for each model.
Hope it helps.
It is now possible to override the generated streaming function code as well; thanks to AWS Support for the information. I also left a message on the related GitHub issue: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need to do is:
- Run amplify override api
- Edit the corresponding override.ts
- Change the code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
    zipFile: `
# python streaming function customized code goes here
`,
};
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code - Properties - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

DynamoDB Schema Advice

I am practicing with DynamoDB and other serverless tools offered by AWS. I am used to working with relational databases like MySQL, so working with Dynamo has been a bit of a challenge for me.
Basically, I am looking to do something similar to what Facebook, Instagram, YouTube, and other popular sites already do: create a platform that allows users to sign up, follow others, and post media (videos and pictures) that can be liked and commented on. Items that can keep growing, like followers or likes, I originally stored as lists in their respective tables; however, I realize that may not be the best approach, as DynamoDB has an item size limit. For example, if someone like Kobe Bryant joined the app and immediately got millions of followers, the list approach would break down.
Like this,
Media:
- MediaID
- UserID
- MediaType
- Size
- S3_URL
- Likes: {
    ...
  }
- Comments: {
    ...
  }
Would it be better to store things like this in separate tables? Or am I now thinking back to relational databases?
For example,
Media:
- MediaID
- UserID
- MediaType
- Size
- S3_URL
Media_Likes:
- LikeID
- MediaID
- UserID
- DateLiked
Media_Comments:
- CommentID
- MediaID
- UserID
- Text
- DateCommented
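Alternatively, if I stick with one table, I suppose likes and comments could become separate items under the media item's partition key rather than lists inside it. A rough sketch of that kind of key layout (the common adjacency-list pattern; the key formats here are made up for illustration):
PK           SK                 Attributes
MEDIA#123    METADATA           UserID, MediaType, Size, S3_URL
MEDIA#123    LIKE#USER#456      DateLiked
MEDIA#123    COMMENT#789        UserID, Text, DateCommented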
Or what else would be the best way to design something like this?

Generate Symfony2 fixtures from DB?

Is it possible to generate fixtures from an existing DB in Symfony2/Doctrine? How could I do that?
Example:
I have defined 15 entities and my Symfony2 application is working. Now some people are able to browse the application, and using it has inserted about 5000 rows so far. Now I want that inserted data as fixtures, but I don't want to write them by hand. How can I generate them from the DB?
There's no direct way within Doctrine or Symfony2, but writing a code generator for it (either inside or outside of sf2) would be trivial. Just pull each property and generate a line of code to set it, then put that code in your fixture loading method. Example:
<?php
$i = 0;
$code = '';
$entities = $em->getRepository('MyApp:Entity')->findAll();
foreach ($entities as $entity) {
    // Escape the dollar signs so we emit literal variable names like $entity_0
    $code .= "\$entity_{$i} = new MyApp\\Entity();\n";
    $code .= "\$entity_{$i}->setMyProperty('" . addslashes($entity->getMyProperty()) . "');\n";
    $code .= "\$manager->persist(\$entity_{$i});\n";
    ++$i;
}
$code .= "\$manager->flush();\n"; // flush once after all persists
// store $code somewhere with file_put_contents
As I understand your question, you have two databases: the first is already in production and filled with 5000 rows; the second is a new database you want to use for tests and development. Is that right?
If so, I suggest you create two entity managers in your test environment: the first will be the 'default' one, used by your project (your controllers, etc.). The second one will connect to your production database. You will find here how to deal with multiple entity managers: http://symfony.com/doc/current/cookbook/doctrine/multiple_entity_managers.html
Then, you should create a fixture class which has access to your container. There is a how-to here: http://symfony.com/doc/current/bundles/DoctrineFixturesBundle/index.html#using-the-container-in-the-fixtures
Using the container, you will have access to both entity managers. And this is the 'magic': retrieve the objects from your production database and persist them with the second entity manager, which will insert them into your test database.
Two points deserve your attention:
- If there are relationships between objects, you will have to take care of those dependencies: owning side, inverse side, ...
- If you have 5000 rows, keep an eye on the memory your script will use. Another solution may be to use native SQL to retrieve all the rows from your production database and insert them into your test database. Or a SQL script...
I don't have any code to suggest, but I hope this idea helps.
I assume that you want to use fixtures (and not just dump the production or staging database into the development database) because a) your schema changes, so the dumps would stop working as you update your code, or b) you don't want to dump the whole database but only want to extend some custom fixtures. An example: you have 206 countries in your staging database and users add cities to those countries; to keep the fixtures small, your development database has only 5 countries, but you want to pull the cities that users added to those 5 countries from staging into development.
The only solution I can think of is to use the mentioned DoctrineFixturesBundle and multiple entity managers.
First of all, you should configure two database connections and two entity managers in your config.yml:
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   %database_driver%
                host:     %database_host%
                port:     %database_port%
                dbname:   %database_name%
                user:     %database_user%
                password: %database_password%
                charset:  UTF8
            staging:
                ...
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        default_entity_manager: default
        entity_managers:
            default:
                connection: default
                mappings:
                    AcmeDemoBundle: ~
            staging:
                connection: staging
                mappings:
                    AcmeDemoBundle: ~
As you can see, both entity managers map the AcmeDemoBundle (the bundle where I will put the code to load the fixtures). If the second database is not on your development machine, you could just dump the SQL from the other machine and load it on the development machine. That should be possible since we are talking about 5000 rows, not millions.
Next, you can implement a fixture loader that uses the service container to retrieve the second entity manager, query the data from the second database with Doctrine, and save it to your development database (the default entity manager):
<?php
namespace Acme\DemoBundle\DataFixtures\ORM;

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;
use Symfony\Component\DependencyInjection\ContainerAwareInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Acme\DemoBundle\Entity\City;
use Acme\DemoBundle\Entity\Country;

class LoadData implements FixtureInterface, ContainerAwareInterface
{
    private $container;
    private $stagingManager;

    public function setContainer(ContainerInterface $container = null)
    {
        $this->container = $container;
        $this->stagingManager = $this->container->get('doctrine')->getManager('staging');
    }

    public function load(ObjectManager $manager)
    {
        $this->loadCountry($manager, 'Austria');
        $this->loadCountry($manager, 'Germany');
        $this->loadCountry($manager, 'France');
        $this->loadCountry($manager, 'Spain');
        $this->loadCountry($manager, 'Great Britain');
        $manager->flush();
    }

    protected function loadCountry(ObjectManager $manager, $countryName)
    {
        $country = new Country($countryName);
        $cities = $this->stagingManager->createQueryBuilder()
            ->select('c')
            ->from('AcmeDemoBundle:City', 'c')
            ->leftJoin('c.country', 'co')
            ->where('co.name = :country')
            ->setParameter('country', $countryName)
            ->getQuery()
            ->getResult();

        foreach ($cities as $city) {
            $city->setCountry($country);
            $manager->persist($city);
        }

        $manager->persist($country);
    }
}
In the loadCountry method, I load the cities from the staging entity manager, attach them to the fixture country (the one that already exists in your current fixtures), and persist them using the default entity manager (your development database).
Sources:
DoctrineFixturesBundle
How to work with Multiple Entity Managers
You could use https://github.com/Webonaute/DoctrineFixturesGeneratorBundle
It adds the ability to generate fixtures for a single entity, using commands like:
$ php bin/console doctrine:generate:fixture --entity=Blog:BlogPost --ids="12 534 124" --name="bug43" --order="1"
Or you can create a full snapshot:
php app/console doctrine:generate:fixture --snapshot --overwrite
Doctrine fixtures are useful because they allow you to create objects and insert them into the database. This is especially useful when you need to create associations or, say, encode a password using one of the password encoders. If you already have the data in a database, you shouldn't really need to pull it out of that format and turn it into PHP code, only to have that PHP code insert the same data back into the database. You could probably just do an SQL dump and re-insert it into your database that way.
Using a fixture would make more sense if you were bootstrapping your project but wanted to create it from user input. If your config file held the default user, you could read that and insert the object.
The AliceBundle can help you do this. It allows you to load fixtures from YAML (or PHP array) files.
For instance you can define your fixtures with:
Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '#user1->id'
Or with the same structure in a PHP array. It's WAY easier than generating working PHP code.
It also supports references:
Nelmio\Entity\User:
    # ...

Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '#user1'
In the doctrine_fixtures cookbook, the last example shows how to get the service container in your fixture class.
With the service container, you can retrieve the Doctrine service and, from it, the entity manager. With the entity manager, you will be able to get all the data you need from your database.
Hope this will help you!