Using TypeORM migrations, how do I specify a particular connection?

I'm using TypeORM and trying to run a migration on a test connection. In my ormconfig.json, I specify two separate connections as follows:
[{
  "name": "default",
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  "username": "username",
  "password": "",
  "database": "database",
  "entities": [
    "build/entity/**/*.js"
  ],
  "migrations": [
    "build/migration/**/*.js"
  ],
  "synchronize": false,
  "autoSchemaSync": true,
  "logging": false,
  "cli": {
    "migrationsDir": "src/migration",
    "entitiesDir": "src/entity",
    "subscribersDir": "src/subscriber"
  }
},
{
  "name": "test",
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  "username": "username",
  "password": "",
  "database": "database-test",
  "entities": [
    "build/entity/**/*.js"
  ],
  "migrations": [
    "build/migration/**/*.js"
  ],
  "synchronize": false,
  "autoSchemaSync": true,
  "logging": false,
  "cli": {
    "migrationsDir": "src/migration",
    "entitiesDir": "src/entity",
    "subscribersDir": "src/subscriber"
  }
}]
How do I specify the connection with name test from the TypeORM CLI? I'm trying things like:
typeorm migrations:run -c test
but I'm not having any luck. Is there a better way to do this?

Though you said you weren't having luck with that, that's exactly what I do. Perhaps some supporting information will help. My exact migration command looks like this (I'm using TypeScript and so I'm running things through ts-node first):
$(npm bin)/ts-node $(npm bin)/typeorm migration:run -c test
In my ormconfig.json, I've specified that there's a default and a test connection, just like yours.
Perhaps it's as simple as saying "migration" rather than "migrations" like you have? When I use "migrations" I just get the help information printed out. Is that what you're seeing?
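If you'd rather not depend on the CLI flag at all, you can also run the migrations programmatically and pick the connection by name. This is only a minimal sketch, assuming the TypeORM 0.2.x API and the ormconfig.json shown above (the "test" entry); the file name is made up:
// migrate-test.ts — run the pending migrations for the "test" connection.
// createConnection("test") loads the ormconfig.json entry whose "name" is "test".
import { createConnection } from "typeorm";

async function runTestMigrations() {
  const connection = await createConnection("test");
  try {
    // Runs every migration matched by that connection's "migrations" globs.
    await connection.runMigrations();
    console.log(`Migrations complete on connection "${connection.name}"`);
  } finally {
    await connection.close();
  }
}

runTestMigrations().catch((err) => {
  console.error(err);
  process.exit(1);
});
You would invoke it the same way as the CLI, e.g. $(npm bin)/ts-node migrate-test.ts.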

Related

AWS service can't start task, but starting task manually works

Until now I had a backend running standalone tasks. I now want to switch to services that start my tasks. For two of the tasks I need direct access to them, so I tried using ServiceConnect.
When I run this task standalone, it starts. When I start a service with the same task inside but without ServiceConnect, it also starts. When I enable ServiceConnect, I get this error message in the 'Deployments and events' tab of the service:
service (...) was unable to place a task because no container instance met all of its requirements.
The closest matching container-instance (...) is missing an attribute required by your task.
For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.
When I check the attributes of all free container instances with:
ecs-cli check-attributes --task-def some-task-definition --container-instances ... --cluster some-cluster
I just get:
Container Instance Missing Attributes
heyvie-backend-dev None
My task definition looks like this:
{
  "family": "some-task-definition",
  "taskRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "982",
  "containerDefinitions": [
    {
      "name": "...",
      "image": "...",
      "essential": true,
      "healthCheck": {
        "command": ["..."],
        "startPeriod": 20,
        "retries": 3
      },
      "portMappings": [
        {
          "name": "somePortName",
          "containerPort": 4321
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "...",
          "containerPath": "..."
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "...",
          "awslogs-region": "eu-...",
          "awslogs-stream-prefix": "..."
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "...",
      "efsVolumeConfiguration": {
        "fileSystemId": "...",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "requiresCompatibilities": ["EC2"]
}
My service definition looks like this:
{
  "cluster": "some-cluster",
  "serviceName": "...",
  "taskDefinition": "some-task-definition",
  "desiredCount": 1,
  "launchType": "EC2",
  "deploymentConfiguration": {
    "maximumPercent": 100,
    "minimumHealthyPercent": 0
  },
  "placementConstraints": [
    {
      "type": "distinctInstance"
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": [
        ...
      ],
      "securityGroups": ["..."],
      "assignPublicIp": "DISABLED"
    }
  },
  "serviceConnectConfiguration": {
    "enabled": true,
    "namespace": "someNamespace",
    "services": [
      {
        "portName": "somePortName",
        "clientAliases": [
          {
            "port": 4321
          }
        ]
      }
    ]
  },
  "schedulingStrategy": "REPLICA",
  "enableECSManagedTags": true,
  "propagateTags": "SERVICE"
}
I also added this to the user data of my launch template:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_CLUSTER=some-cluster
EOF
Did anyone experience something similar, or does anyone know what could cause this issue?
I used ServiceDiscovery instead; I think it's the easiest way to get around the dynamic IP address of a task in a service (the IP address changes on every restart, which is probably what you're trying to avoid?).
With ServiceDiscovery you create a DNS record, and instead of ip-address:port you can just use serviceNameOfNamespace.namespace to connect to a task. ServiceDiscovery worked without any problem on an existing task.
Hope that helps. I don't really know of any benefits of ServiceConnect beyond higher connection counts and retry functionality, so if anybody knows more about the differences between the two, I'm happy to learn.
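For reference, a minimal sketch of what the Service Discovery wiring can look like when the service is created with the AWS SDK instead of Service Connect. The Cloud Map service ARN, subnet/security-group IDs, and the service name below are placeholders, not values from the setup above, and the Cloud Map service is assumed to exist already:
// create-service.ts — hedged sketch using AWS SDK v3 (@aws-sdk/client-ecs).
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

async function createServiceWithDiscovery() {
  const ecs = new ECSClient({ region: "eu-west-1" });
  await ecs.send(
    new CreateServiceCommand({
      cluster: "some-cluster",
      serviceName: "heyvie-backend",          // placeholder name
      taskDefinition: "some-task-definition",
      desiredCount: 1,
      launchType: "EC2",
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-placeholder"],    // placeholder subnet id
          securityGroups: ["sg-placeholder"], // placeholder security group id
          assignPublicIp: "DISABLED",
        },
      },
      // Instead of serviceConnectConfiguration: register the tasks with a
      // Cloud Map service so they are reachable as <serviceName>.<namespace>.
      serviceRegistries: [
        {
          registryArn:
            "arn:aws:servicediscovery:eu-west-1:111111111111:service/srv-placeholder",
        },
      ],
    })
  );
}

createServiceWithDiscovery().catch(console.error);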

How can I use Amazon SES with Ghost?

I saw at https://ghost.org/docs/config/#mail that SES is allowed.
I edited my config.production.json and ran ghost restart, but Ghost still says:
Set up Mailgun to start sending newsletters!
The config I used was:
"mail": {
"from": "My Name <example#example.com>",
"transport": "SMTP",
"options": {
"host": "email-smtp.us-east-1.amazonaws.com",
"port": 465,
"service": "SES",
"auth": {
"user": "asdfadsffdsf",
"pass": "asdfasdfadfs"
}
}
},
I got the SMTP-ACCESS-KEY-ID and SES-SMTP-SECRET-ACCESS-KEY from https://us-east-1.console.aws.amazon.com/ses/home?region=us-east-1#/smtp
What did I do wrong?
I also had this issue.
I fixed it by using the below config for Ghost.
"mail": {
"transport": "SMTP",
"options": {
"host": "email-smtp.us-east-1.amazonaws.com",
"port": 2465,
"secure": true,
"service": "SES",
"from": "'Name' email",
"auth": {
"user": "access-key",
"pass": "secret-access-key"
}
}
}
Notice some changes from your config:
"port": 2465,
added "secure": true,
use the IAM secret access key as it is. (Don't convert to SMTP password.)
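If it still doesn't send, it can help to test the exact SMTP settings outside of Ghost. Here is a minimal sketch using nodemailer (the library Ghost's SMTP transport is built on); the host, port, and credential placeholders are assumptions to be replaced with your own values:
// check-smtp.ts — verifies the SES SMTP endpoint and credentials without sending mail.
import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: "email-smtp.us-east-1.amazonaws.com",
  port: 465,                  // or 2465, as in the config above
  secure: true,               // TLS from the start, matching "secure": true
  auth: {
    user: "YOUR-SMTP-ACCESS-KEY-ID",         // placeholder
    pass: "YOUR-SES-SMTP-SECRET-ACCESS-KEY", // placeholder
  },
});

// verify() opens a connection and authenticates, so credential, port, or TLS
// problems surface immediately instead of when Ghost tries to send.
transporter
  .verify()
  .then(() => console.log("SMTP connection OK"))
  .catch((err) => console.error("SMTP connection failed:", err));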

posixAccounts API information missing

I'm not seeing my posixAccounts information in the response from the following API method:
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/get
{
  "kind": "admin#directory#user",
  "id": "8675309",
  "etag": "\"UUID\"",
  "primaryEmail": "email#example.com",
  "name": {
    "givenName": "Email",
    "familyName": "Account",
    "fullName": "Email Account"
  },
  "isAdmin": true,
  "isDelegatedAdmin": false,
  "lastLoginTime": "2021-08-04T21:11:17.000Z",
  "creationTime": "2021-06-16T14:32:35.000Z",
  "agreedToTerms": true,
  "suspended": false,
  "archived": false,
  "changePasswordAtNextLogin": false,
  "ipWhitelisted": false,
  "emails": [
    {
      "address": "email#example.com",
      "primary": true
    },
    {
      "address": "email#example.com.test-google-a.com"
    }
  ],
  "phones": [
    {
      "value": "123-456-7890",
      "type": "work"
    }
  ],
  "nonEditableAliases": [
    "email#example.com.test-google-a.com"
  ],
  "customerId": "id12345",
  "orgUnitPath": "/path/to/org",
  "isMailboxSetup": true,
  "isEnrolledIn2Sv": false,
  "isEnforcedIn2Sv": false,
  "includeInGlobalAddressList": true
}
As you can see from the above output, there's no posixAccounts information. I can open the LDAP information in Apache Directory Studio, so I know it's there, but it doesn't appear in the above output. Since the data exists, I tried to update it using the update method of the API.
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/update
I used this as the payload, since I'm just testing an update of the gid information. I used the documentation below to work out the fields needed, at least as far as I could tell.
{
  "posixAccounts": [
    {
      "gid": "12345"
    }
  ]
}
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users
I'm getting a 200 response, but nothing is actually changing for the user when doing a PUT to update.
I tried a similar update method from another user on here, but to no avail: Google Admin SDK - Create posix attributes on existing user
I was able to get this resolved by supplying additional details in my PUT request:
{
  "posixAccounts": [
    {
      "username": "email(excluding #domain.com)",
      "uid": "1234",
      "gid": "12345",
      "operatingSystemType": "unspecified",
      "shell": "/bin/bash",
      "gecos": "Firstname Lastname",
      "systemId": ""
    }
  ]
}
The changes wouldn't show up in LDAP until I put "systemId" in there, so that part is required.
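For completeness, here is a minimal sketch of sending that PUT through the googleapis Node client rather than raw HTTP. The auth setup (typically a service account with domain-wide delegation) and the userKey are assumptions, not values from the question:
// set-posix.ts — hedged sketch against the Admin SDK Directory API.
import { google } from "googleapis";

async function setPosixAccount() {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/admin.directory.user"],
  });
  const admin = google.admin({ version: "directory_v1", auth });

  // users.update performs the PUT; the body mirrors the payload above.
  await admin.users.update({
    userKey: "email@example.com",      // primary email or user id (placeholder)
    requestBody: {
      posixAccounts: [
        {
          username: "email",           // local part, without @domain.com
          uid: "1234",
          gid: "12345",
          operatingSystemType: "unspecified",
          shell: "/bin/bash",
          gecos: "Firstname Lastname",
          systemId: "",                // required for the change to reach LDAP
        },
      ],
    },
  });
}

setPosixAccount().catch(console.error);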

User model giving email validation error in loopback3

I have created a model MyUser based on the User model as follows:
{
  "name": "MyUser",
  "base": "User",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "ownerRelations": true,
  "emailVerificationRequired": true,
  "hidden": [
    "email",
    "emailVerified"
  ],
  "properties": {
    "firstName": {
      "type": "string",
      "required": true
    },
    "lastName": {
      "type": "string",
      "required": true
    },
    "email": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {
    "accessTokens": {
      "type": "hasMany",
      "model": "CustomAccessToken",
      "polymorphic": {
        "foreignKey": "userId",
        "discriminator": "principalType"
      },
      "options": {
        "disableInclude": true
      }
    }
  },
  "acls": [
    {
      "accessType": "*",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "DENY"
    },
    {
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "ALLOW",
      "property": "create"
    }
  ],
  "methods": {}
}
It works perfectly on my local server. But when I host it on AWS EC2, a user is only created the first time. After that, even if I provide a different email id, I get an email validation error as follows:
The 'MyUser' instance is not valid. Details:emailEmail already exists (value: undefined).
Now if I delete the first record and try to create a user, it works. So only the first record is getting inserted, and after that I get the validation error even for unique emails. I am not sure why this is happening.
I was able to figure out the problem. I have kept email as a hidden field, and LoopBack has "prohibitHiddenPropertiesInQuery": true by default, which does not allow hidden properties in queries. As a result, during user validation the email was dropped from the uniqueness query in loopback-datasource-juggler/lib/validations.js. Upon setting prohibitHiddenPropertiesInQuery to false, it works.
However, I am still not sure why it was working in development mode on my local server. prohibitHiddenPropertiesInQuery is true by default, so it should not have worked in development mode either. If anyone has an answer to this, please let me know.

"Invalid configuration for registry" error when executing "eb local run"

I think this is a very easy problem to fix, but I just can't seem to solve it! I've spent a good amount of time looking for leads on Google/SO but couldn't find a solution.
When executing eb local run, I'm getting this error:
Invalid configuration for registry
$ eb local run
ERROR: InvalidConfigFile :: Invalid configuration for registry 12345678.dkr.ecr.eu-west-1.amazonaws.com
My Dockerrun.aws.json, including the image lines, is as follows:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "frontend",
      "host": {
        "sourcePath": "/var/app/current/frontend"
      }
    },
    {
      "name": "backend",
      "host": {
        "sourcePath": "/var/app/current/backend"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/config/nginx"
      }
    },
    {
      "name": "nginx-proxy-content",
      "host": {
        "sourcePath": "/var/app/current/content/"
      }
    },
    {
      "name": "nginx-proxy-ssl",
      "host": {
        "sourcePath": "/var/app/current/config/ssl"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/backend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/backend",
          "sourceVolume": "backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxsaxaax"
        },
        {
          "name": "PG_USERNAME",
          "value": "
        },
        {
          "name": "PG_HOST",
          "value": "123456.dsadsau89das.eu-west-1.rds.amazonaws.com"
        },
        {
          "name": "FE_URL",
          "value": "http://develop1.com"
        }
      ]
    },
    {
      "name": "frontend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "links": [
        "backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/frontend",
          "sourceVolume": "frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "www.eample.com"
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ],
      "links": [
        "backend",
        "frontend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "nginx-proxy-content",
          "containerPath": "/var/www/html"
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-ssl",
          "containerPath": "/etc/nginx/ssl",
          "readOnly": true
        }
      ]
    }
  ],
  "family": ""
}
It seems that you have a broken Docker registry auth config file. In your home directory, the file ~/.docker/config.json should look something like:
{
  "auths": {
    "https://1234567890.dkr.ecr.us-east-1.amazonaws.com": {
      "auth": "xxxxxx"
    }
  }
}
That file is generated by the docker login command (related to aws ecr get-login).
Check that. I say this because you are hitting this exception:
for registry, entry in six.iteritems(entries):
    if not isinstance(entry, dict):
        # (...)
        if raise_on_error:
            raise errors.InvalidConfigFile(
                'Invalid configuration for registry {0}'.format(registry)
            )
        return {}
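As a side note, the auth entries in that file are just a base64-encoded user/password pair obtained from ECR. Below is a minimal sketch of fetching one with the AWS SDK, which is essentially what aws ecr get-login automates; the region is an assumption and the code is illustrative, not part of the original answer:
// ecr-login.ts — hedged sketch: fetch an ECR authorization token and print
// the docker login command it corresponds to.
import { ECRClient, GetAuthorizationTokenCommand } from "@aws-sdk/client-ecr";

async function printEcrLogin() {
  const ecr = new ECRClient({ region: "eu-west-1" });
  const res = await ecr.send(new GetAuthorizationTokenCommand({}));
  const data = res.authorizationData?.[0];
  if (!data?.authorizationToken || !data.proxyEndpoint) {
    throw new Error("No authorization data returned");
  }
  // The token decodes to "AWS:<password>".
  const [user, password] = Buffer.from(data.authorizationToken, "base64")
    .toString("utf8")
    .split(":");
  console.log(`docker login -u ${user} -p ${password} ${data.proxyEndpoint}`);
}

printEcrLogin().catch(console.error);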
This is due to outdated dependencies in the current version of the awsebcli tool. It pins "docker-py (>=1.1.0,<=1.7.2)", which does not support the newer credential helper formats. The latest version of docker-py is the first one to properly support the latest credential helper format, and until the AWS EB CLI developers update docker-py to 2.4.0 (https://github.com/docker/docker-py/releases/tag/2.4.0) this will remain broken.
The first issue is that it's not valid JSON: the PG_USERNAME field is missing its closing quote.
{
  "name": "PG_USERNAME",
  "value": "
},
Should be
{
  "name": "PG_USERNAME",
  "value": ""
},
The next thing to check is whether your Beanstalk instance profile has access to the ECR registry.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the Name key in the format <organization>/<image name> for images on Docker Hub, or <site>/<organization name>/<image name> for other sites.
When you specify an image in the Dockerrun.aws.json file, each instance in your Elastic Beanstalk environment will run docker pull on that image and run it. Optionally include the Update key. The default value is "true" and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.
Do not specify the Image key in the Dockerrun.aws.json file when using a Dockerfile. Elastic Beanstalk will always build and use the image described in the Dockerfile when one is present.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
Test to make sure you can access your ECR registry outside of Elastic Beanstalk as well.
$ docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
latest: Pulling from amazonlinux
8e3fa21c4cc4: Pull complete
Digest: sha256:59895a93ba4345e238926c0f4f4a3969b1ec5aa0a291a182816a4630c62df769
Status: Downloaded newer image for aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html