Creating a test database with Prisma - unit-testing

I'm using Vitest to test my SvelteKit + Prisma app with a SQLite backend. For some tests, mocking Prisma is not enough and I need to populate a test database with test data.
What is the proper approach to using Prisma to create a test database? My schema specifies the data source:
datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}
Can I somehow override that with another URL, and run npx prisma db push (or migrate) before running the test suite?
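To make this concrete, here is the kind of thing I'm imagining (untested sketch; the file name tests/global-setup.ts, the file:./test.db URL, and wiring it up via globalSetup in vitest.config.ts are all my guesses):

// tests/global-setup.ts -- registered via `globalSetup` in vitest.config.ts
import { execSync } from 'node:child_process';

export default function setup() {
  const testUrl = 'file:./test.db';
  // Override the schema's env("DATABASE_URL") for everything in this process.
  process.env.DATABASE_URL = testUrl;
  // Push the schema into a throwaway SQLite file before the suite runs.
  execSync('npx prisma db push', {
    env: { ...process.env, DATABASE_URL: testUrl },
    stdio: 'inherit',
  });
}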

I ran into the same issue once when I was using NestJS, and I ended up using an env file for testing.
It was something like this:
"test:e2e": "env-cmd -f .env.test npx prisma db push"
and in the schema I used it:
generator client {
  provider = "prisma-client-js"
  output   = env("output")
}
datasource db {
  provider = "mysql"
  url      = env("DATABASE_URL")
}
The output field in the generator block specifies where to generate the Prisma client.
In .env.test I put the testing database URL.
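For example, .env.test contained something like this (the values here are placeholders):

DATABASE_URL="mysql://user:password@localhost:3306/test_db"
output="../node_modules/.prisma/test-client"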

Related

Django test to use Postgres extension without migration

I have an existing project to which I want to add a test step. There were quite a few data migrations over its history and I don't want to spend the effort making them run in the test setup. So I have disabled migrations:
DATABASES['default']['TEST'] = {
    'MIGRATE': False,
}
However, this Postgres DB makes use of some extensions:
class Meta:
    verbose_name = 'PriceBook Cache'
    verbose_name_plural = 'PriceBook Caches'
    indexes = [
        GinIndex(
            name="pricebookcache_gin_trgm_idx",
            fields=['itemid', 'itemdescription', 'manufacturerpartnumber', 'vendor_id'],
            opclasses=['gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops']
        ),
    ]
which results in an error when I run the tests:
psycopg2.errors.UndefinedObject: operator class "gin_trgm_ops" does not exist for access method "gin"
I have had a look at https://docs.djangoproject.com/en/4.1/ref/databases/#migration-operation-for-adding-extensions, which specifically says this is done via a migration operation, but I have disabled migrations.
An alternative is using a template database, but I would rather not, as the tests will run automatically in GitLab inside a Docker container and I don't want to maintain another fixture outside the project repo.
So is there a way to initialise the database without running the migrations, or is it possible to make it run a completely different set of migrations just for the tests? One workaround I'm considering is creating the extension on every new connection via the connection_created signal, something like the sketch below, but I'm not sure it's the right approach.
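Untested sketch (where exactly to register the receiver, e.g. an AppConfig.ready(), is my assumption):

# Register early, e.g. in an AppConfig.ready(), so the receiver is active
# before the test database and its tables are created.
from django.db.backends.signals import connection_created
from django.dispatch import receiver

@receiver(connection_created)
def create_extensions(sender, connection, **kwargs):
    # Fires for every new connection, including the one Django uses to
    # build the test tables, so the extension exists before index creation.
    if connection.vendor == 'postgresql':
        with connection.cursor() as cursor:
            cursor.execute('CREATE EXTENSION IF NOT EXISTS pg_trgm;')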

EF Core migration can't use secret manager

When I create .NET Core web applications, I use the secret manager during testing. I am generally able to create a new web project (MVC or Web API), right-click on the project and select "Manage user secrets". This opens a JSON file where I add the secrets. I then use this in my Startup.cs something like this:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseMySql(Configuration["connectionString"]));
The website works fine with this and connects well to the database. However, when I try using EF Core migration commands such as add-migration, they don't seem to be able to access the connection string from the secret manager. I get an error saying "connection string can't be null". The error goes away when I hard-code the actual string in place of Configuration["connectionString"]. I have checked online and checked the .csproj file; it already contains the following lines:
<UserSecretsId>My app name</UserSecretsId>
And later:
<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.1" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>
Is there anything I need to add so the migrations can access the connection string?
Update
I only have one constructor in the context class:
public ApplicationDBContext(DbContextOptions<ApplicationDBContext> options) : base(options)
{
}
I am currently coming across this exact problem as well. I have come up with a solution that works for now, but one might consider it messy at best.
I have created a static Configuration class that builds and returns an IConfiguration when requested:
public static class Configuration
{
    public static IConfiguration GetConfiguration()
    {
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();
    }
}
In the migration, you can then get the configuration and access its user secrets like this:
protected override void Up(MigrationBuilder migrationBuilder)
{
    var conf = Configuration.GetConfiguration();
    var secret = conf["Secret"];
}
I have tested creating a SQL script with these user secrets, and it works (you obviously wouldn't want to keep the script lying around since it would expose the actual secret).
Update
The above config can also be set up in the Program.cs class in the BuildWebHost method:
var config = new ConfigurationBuilder().AddUserSecrets<Startup>().Build();
return WebHost.CreateDefaultBuilder(args).UseConfiguration(config)...Build()
Or in the Startup constructor, if using that convention.
Update 2 (explanation)
It turns out this issue occurs because the migration commands run with the environment set to "Production". The secret manager is pre-set to work only in the "Development" environment (for a good reason). The .AddUserSecrets<Startup>() call simply adds the secrets for all environments.
To ensure that this isn't set on your production server, there are two solutions I have noticed. One is suggested here: https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell
Set env:ASPNETCORE_ENVIRONMENT before running to specify the ASP.NET Core environment.
This solution means there is no need to add .AddUserSecrets<Startup>() to every project created on the computer in the future. However, if you happen to share this project across other computers, it needs to be configured on each computer.
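For example, in the Package Manager Console, before running add-migration (the variable only lasts for that console session):

$env:ASPNETCORE_ENVIRONMENT = "Development"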
The second solution is to add .AddUserSecrets<Startup>() only on debug builds, like this:
return new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
#if DEBUG
    .AddUserSecrets<Startup>()
#endif
    .AddEnvironmentVariables()
    .Build();
Additional Info
The IConfiguration interface can be passed to controllers in their constructor, i.e.
private readonly IConfiguration _configuration;

public TestController(IConfiguration configuration)
{
    _configuration = configuration;
}
Thus, any secrets and application settings are accessible in that controller via _configuration["secret"].
However, if you want to access application secrets from, for example, a migration file, which exists outside of the web application itself, you need to fall back on the original answer, because there is no easy way (that I know of) to access those secrets otherwise (one use case I can think of is seeding the database with an admin user and a master password).
To use migrations in .NET Core with user secrets, we can also add a class (SqlContextFactory) that creates its own instance of the SqlContext using a dedicated config builder. This way we do not have to create a workaround in our Program or Startup classes. In the example below, SqlContext is an implementation of DbContext/IdentityDbContext.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
using Microsoft.Extensions.Configuration;

public class SqlContextFactory : IDesignTimeDbContextFactory<SqlContext>
{
    public SqlContext CreateDbContext(string[] args)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();

        var builder = new DbContextOptionsBuilder<SqlContext>();
        builder.UseSqlServer(config.GetConnectionString("DefaultConnection"));
        return new SqlContext(builder.Options);
    }
}
Since I have noticed a lot of people running into this confusion, I am writing a simplified version of this resolution.
The Problem/Confusion
The secret manager in .NET Core is designed to work only in the Development environment. When running your app, your launchSettings.json file ensures that your ASPNETCORE_ENVIRONMENT variable is set to "Development". However, when you run EF migrations, this file is not used. As a result, migrations do not run in the Development environment and thus have no access to the secret manager. This often causes confusion as to why EF migrations can't use the secret manager.
The Resolution
Make sure your "ASPNETCORE_ENVIRONMENT" environment variable is set to "Development" on your computer.
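For example (machine-wide approaches, one per OS; new shells pick the value up):

:: Windows (cmd)
setx ASPNETCORE_ENVIRONMENT "Development"

# macOS/Linux (e.g. in your shell profile)
export ASPNETCORE_ENVIRONMENT=Development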
Using .AddUserSecrets<Startup>() will create a circular reference if your DbContext lives in a separate class library and you use a design-time factory (DesignTimeDbContextFactory).
The clean way of doing it is:
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<AppDbContext>
{
    public AppDbContext CreateDbContext(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
#if DEBUG
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{project path}/appsettings.Development.json",
                optional: true, reloadOnChange: true)
#else
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{startup project path}/appsettings.json",
                optional: true, reloadOnChange: true)
#endif
            .AddEnvironmentVariables()
            .Build();

        var connectionString = configuration.GetConnectionString("DefaultConnection");
        var builder = new DbContextOptionsBuilder<AppDbContext>();
        Console.WriteLine(connectionString);
        builder.UseSqlServer(connectionString);
        return new AppDbContext(builder.Options);
    }
}
The explanation:
Secret Manager is meant for development time only, so this will not affect the migration if you run it in a pipeline in QA or Production stages; to handle that, we use the dev connection string from appsettings.Development.json inside the #if DEBUG branch.
The benefit of this approach is that it decouples your data-infrastructure class library from the web project's Startup class.

How to unit test a lambda function that includes a dynamoDB query

I have a function in my Alexa skill's lambda function that I am trying to do a unit test for using the aws-lambda-mock-context node package. The method I am trying to test includes a call to DynamoDB to check if an item exists in my table.
At the moment, my test immediately fails with CredentialsError: Missing credentials in config. Following this blog, I tried to manually enter my Amazon IAM credentials into a .aws/credentials file. Testing with the credentials leads to the test running for 30+ seconds before timing out, with no success or failure result from DynamoDB. I am not sure where to go from here.
The function I am looking to unit test looks like this:
helper.prototype.checkForItem = function(alexa) {
    var registration_id = 123;
    var params = {
        TableName: 'registrations',
        Key: {
            id: {"N": registration_id}
        }
    };
    return this.getItemFromDB(params).then(function(data) {
        //...
    });
};
And the call to DynamoDB:
helper.prototype.getItemFromDB = function(params) {
    return new Promise(function(fulfill, reject) {
        dynamoDB.getItem(params, function(err, data) {
            if (err == null) {
                console.log("fulfilled");
                fulfill(data);
            }
            else {
                console.log("error receiving data " + err);
                reject(null);
            }
        });
    });
};
You can use SAM Local to test your Lambda:
AWS SAM is a fast and easy way of deploying your serverless applications, allowing you to write simple templates to describe your functions and their event sources (Amazon API Gateway, Amazon S3, Kinesis, and so on). Based on AWS SAM, SAM Local is an AWS CLI tool that provides an environment for you to develop, test, and analyze your serverless applications locally before uploading them to the Lambda runtime. Whether you're developing on Linux, Mac, or Microsoft Windows, you can use SAM Local to create a local testing environment that simulates the AWS runtime environment. Doing so helps you address issues such as performance. Working with SAM Local also allows faster, iterative development of your Lambda function code because there is no need to redeploy your application package to the AWS Lambda runtime. For more information, see Building a Simple Application Using SAM Local.
If you want to do unit testing, you can mock the DynamoDB endpoint using any mocking library, such as nock. You can also inspect in Fiddler the request/response your app exchanges with the DynamoDB endpoint and troubleshoot accordingly. A sketch of the mocking approach follows.
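For instance, with the aws-sdk-mock package (a hedged sketch: it assumes the question's helper uses the AWS SDK v2 callback-style dynamoDB.getItem, and the canned item shape is copied from the question):

// hypothetical test file
const AWSMock = require('aws-sdk-mock');
const AWS = require('aws-sdk');

AWSMock.setSDKInstance(AWS);

// Stub getItem so the test needs no real credentials or network access.
AWSMock.mock('DynamoDB', 'getItem', function (params, callback) {
  // getItemFromDB's promise will fulfill with this canned response.
  callback(null, { Item: { id: { N: '123' } } });
});

// ... construct the helper and assert on checkForItem() here ...

AWSMock.restore('DynamoDB'); // undo the stub when the test is done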

Generate Symfony2 fixtures from DB?

Is it possible to generate fixtures from an existing DB in Symfony2/Doctrine? How could I do that?
Example:
I have defined 15 entities and my Symfony2 application is working. Now some people are able to browse the application, and through their use it has inserted about 5000 rows so far. Now I want that inserted data as fixtures, but I don't want to write them by hand. How can I generate them from the DB?
There's no direct mechanism within Doctrine or Symfony2, but writing a code generator for it (either inside or outside of SF2) would be trivial. Just pull each property and generate a line of code to set each property, then put it in your fixture-loading method. Example:
<?php
$i = 0;
$code = '';
$entities = $em->getRepository('MyApp:Entity')->findAll();
foreach ($entities as $entity)
{
    $code .= "\$entity_{$i} = new MyApp\\Entity();\n";
    $code .= "\$entity_{$i}->setMyProperty('" . addslashes($entity->getMyProperty()) . "');\n";
    $code .= "\$manager->persist(\$entity_{$i});\n\$manager->flush();\n";
    ++$i;
}
// store code somewhere with file_put_contents
As I understand your question, you have two databases: the first is already in production and filled with 5000 rows, and the second one is a new database you want to use for tests and development. Is that right?
If so, I suggest you create two entity managers in your test environment: the first will be the 'default' one, used in your project (controllers, etc.); the second will be used to connect to your production database. You will find here how to deal with multiple entity managers: http://symfony.com/doc/current/cookbook/doctrine/multiple_entity_managers.html
Then you should create a fixture class which has access to your container. There is a how-to here: http://symfony.com/doc/current/bundles/DoctrineFixturesBundle/index.html#using-the-container-in-the-fixtures.
Using the container, you will have access to both entity managers. And this is the 'magic': you will retrieve the objects from your production database and persist them in the second entity manager, which will insert them into your test database.
I draw your attention to two points:
If there are relationships between objects, you will have to take care of those dependencies: owning side, inverse side, ...
If you have 5000 rows, watch the memory your script will use. Another solution may be to use native SQL to retrieve all the rows from your production database and insert them into your test database. Or a SQL script...
I do not have any code to suggest, but I hope this idea will help you.
I assume that you want to use fixtures (and not just dump the production or staging database into the development database) because (a) your schema changes, so the dumps would stop working when you update your code, or (b) you don't want to dump the whole database but only want to extend some custom fixtures. An example: you have 206 countries in your staging database and users add cities to those countries; to keep the fixtures small you only have 5 countries in your development database, but you want to add the cities that users added to those 5 countries in staging to the development database.
The only solution I can think of is to use the mentioned DoctrineFixturesBundle and multiple entity managers.
First of all, you should configure two database connections and two entity managers in your config.yml:
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   %database_driver%
                host:     %database_host%
                port:     %database_port%
                dbname:   %database_name%
                user:     %database_user%
                password: %database_password%
                charset:  UTF8
            staging:
                ...
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        default_entity_manager: default
        entity_managers:
            default:
                connection: default
                mappings:
                    AcmeDemoBundle: ~
            staging:
                connection: staging
                mappings:
                    AcmeDemoBundle: ~
As you can see, both entity managers map the AcmeDemoBundle (in this bundle I will put the code to load the fixtures). If the second database is not on your development machine, you could just dump the SQL from the other machine to the development machine. That should be possible since we are talking about 5000 rows and not about millions of rows.
What you can do next is implement a fixture loader that uses the service container to retrieve the second entity manager, query the data from the second database with Doctrine, and save it to your development database (the default entity manager):
<?php

namespace Acme\DemoBundle\DataFixtures\ORM;

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;
use Symfony\Component\DependencyInjection\ContainerAwareInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Acme\DemoBundle\Entity\City;
use Acme\DemoBundle\Entity\Country;

class LoadData implements FixtureInterface, ContainerAwareInterface
{
    private $container;
    private $stagingManager;

    public function setContainer(ContainerInterface $container = null)
    {
        $this->container = $container;
        $this->stagingManager = $this->container->get('doctrine')->getManager('staging');
    }

    public function load(ObjectManager $manager)
    {
        $this->loadCountry($manager, 'Austria');
        $this->loadCountry($manager, 'Germany');
        $this->loadCountry($manager, 'France');
        $this->loadCountry($manager, 'Spain');
        $this->loadCountry($manager, 'Great Britain');

        $manager->flush();
    }

    protected function loadCountry(ObjectManager $manager, $countryName)
    {
        $country = new Country($countryName);

        $cities = $this->stagingManager->createQueryBuilder()
            ->select('c')
            ->from('AcmeDemoBundle:City', 'c')
            ->leftJoin('c.country', 'co')
            ->where('co.name = :country')
            ->setParameter('country', $countryName)
            ->getQuery()
            ->getResult();

        foreach ($cities as $city) {
            $city->setCountry($country);
            $manager->persist($city);
        }

        $manager->persist($country);
    }
}
What I do in the loadCountry method is load the objects from the staging entity manager, attach them to the fixture country (the one that already exists in your current fixtures), and persist them using the default entity manager (your development database).
Sources:
DoctrineFixturesBundle
How to work with Multiple Entity Managers
You could use https://github.com/Webonaute/DoctrineFixturesGeneratorBundle
It adds the ability to generate fixtures for a single entity using commands like:
$ php bin/console doctrine:generate:fixture --entity=Blog:BlogPost --ids="12 534 124" --name="bug43" --order="1"
Or you can create a full snapshot:
php app/console doctrine:generate:fixture --snapshot --overwrite
The Doctrine Fixtures are useful because they allow you to create objects and insert them into the database. This is especially useful when you need to create associations or say, encode a password using one of the password encoders. If you already have the data in a database, you shouldn't really need to bring them out of that format and turn it into PHP code, only to have that PHP code insert the same data back into the database. You could probably just do an SQL dump and then re-insert them into your database again that way.
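For example (assuming MySQL; the database names here are placeholders):

mysqldump my_prod_db > data-dump.sql
mysql my_dev_db < data-dump.sql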
Using a fixture would make more sense if you were initializing your project but wanted to use user input to create it. If you had the default user in your config file, you could read that and insert the object.
The AliceBundle can help you do this. Indeed, it allows you to load fixtures with YAML (or PHP array) files.
For instance you can define your fixtures with:
Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '#user1->id'
Or with the same structure in a PHP array. It's WAY easier than generating working PHP code.
It also supports references:
Nelmio\Entity\User:
    # ...

Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '#user1'
In the doctrine_fixture cookbook, you can see in the last example how to get the service container in your fixture class.
With this service container, you can retrieve the Doctrine service, then the entity manager. With the entity manager, you will be able to get all the data you need from your database.
Hope this will help you!

How do I run a unit test against the production database?

How do I run a unit test against the production database instead of the test database?
I have a bug that's seems to occur on my production server but not on my development computer.
I don't care if the database gets trashed.
Is it feasible to make a copy of the database, or of the part of the database that causes the problem? If you keep a backup server, you might be able to copy the data from there instead (make sure you have another backup, in case you mess up the backup database).
Basically, you don't want to mess with live data, and you don't want to be left with no backup in case you mess something up (and you will!).
Use manage.py dumpdata > mydata.json to get a copy of the data from your database.
Go to your local machine, copy mydata.json to a subdirectory of your app called fixtures, e.g. myapp/fixtures/mydata.json, and do:
manage.py syncdb # Set up an empty database
manage.py loaddata mydata.json
Your local database will be populated with data and you can test away.
Make a copy of the database... it's really good practice!
Just execute the test, but instead of calling commit at the end, call rollback. A sketch of that idea with Django's transaction API is below.
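A hedged sketch using Django's transaction API (the test body is a placeholder; adapt it to your test runner):

from django.db import transaction
import unittest

class ProductionBugTest(unittest.TestCase):
    def test_bug(self):
        with transaction.atomic():
            # ... exercise the code path that misbehaves in production ...
            # Mark the atomic block for rollback so no writes are committed.
            transaction.set_rollback(True)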
The first thing to try should be manually executing the test code in a shell on the production server:
python manage.py shell
If that doesn't work, you may need to dump the production data, copy it locally, and use it as a fixture for the test case.
If there is a way to ask Django to use the standard database without creating a new one, then rather than creating a fixture, you can do an SQL dump, which will generally be a much smaller file.
Short answer: you don't.
Long answer: you don't; you make a copy of the production database and run the tests against that.
If you really don't care about trashing the db, then Marco's answer of rolling back the transaction is my preferred choice as well. You could also try NdbUnit but I personally don't think the extra baggage it brings is worth the gains.
How do you test the test db now? By test db do you mean SQLite?
I have both a full-on-slow-django-test-db suite and a crazy-fast-runs-against-production test suite built from a common test module. I use the production suite for sanity checking my changes during development and as a commit validation step on my development machine. The django suite module looks like this:
import django.test
import my_test_module
...

class MyTests(django.test.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)
The production test suite module uses bare unittest and looks like this:
import unittest
import my_test_module

class MyTests(unittest.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
unittest.TextTestRunner(verbosity=2).run(suite)
The test module looks like this:
def XXX(testcase):
    testcase.assertEquals('foo', 'bar')
I run the bare unittest version like this, so my tests in either case have the django ORM available to them:
% python manage.py shell < run_unit_tests
where run_unit_tests consists of:
import path.to.production_module
The production module needs a slightly different setUp() and tearDown() from the django version, and you can put any required table cleaning in there. I also use the django test client in the common test module by mimicking the test client class:
class FakeDict(dict):
    """
    Class that wraps dict and provides a getlist member,
    used by the django view request unpacking code when
    passing in a FakeRequest (see below); only needed for those
    api entrypoints that have list parameters.
    """
    def getlist(self, name):
        return [x for x in self.get(name)]

class FakeRequest(object):
    """
    An object mimicking the django request object passed in to views,
    so we can test the api entrypoints from the developer unit test
    framework.
    """
    user = get_test_user()
    GET = {}
    POST = {}
Here's an example of a test module function that tests via the client:
def XXX(testcase):
    if getattr(testcase, 'client', None) is None:
        req_dict = FakeDict()
    else:
        req_dict = {}
    req_dict['param'] = 'value'

    if getattr(testcase, 'client', None) is None:
        fake_req = FakeRequest()
        fake_req.POST = req_dict
        resp = view_function_to_test(fake_req)
    else:
        resp = testcase.client.post('/path/to/function_to_test/', req_dict)
    ...
I've found this structure works really well, and the super-speedy production version of the suite is a major time-saver.
If your database supports template databases, use the production database as a template database. Ensure that your Django database user has sufficient permissions.
If you are using PostgreSQL, you can easily do this by specifying the name of your production database as POSTGIS_TEMPLATE (and using the PostGIS backend).
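The same idea can also be expressed with Django's TEST database setting (a hedged sketch; the TEMPLATE option is PostgreSQL-specific and the database name here is a placeholder):

# settings.py -- build the test database by cloning the production one
DATABASES['default']['TEST'] = {
    'TEMPLATE': 'my_production_db',
}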