Unable to load strapi instance in unit testing - unit-testing

I am trying to run test cases for Strapi, but it fails to load the instance. I am using a Postgres database.
I have checked other answers; I have only a database.js file, not a database.json.
The error I get:
lerna ERR! yarn run test:local stderr:
● process.exit called with "1"
7 | async function setupStrapi() {
8 | if (!instance) {
> 9 | instance = await Strapi().load();
| ^
10 | await instance.app
11 | .use(instance.router.routes()) // populate KOA routes
12 | .use(instance.router.allowedMethods()); // populate KOA methods
at Strapi.stop (../../node_modules/strapi/lib/Strapi.js:316:13)
at Strapi.stopWithError (../../node_modules/strapi/lib/Strapi.js:302:17)
at ../../node_modules/strapi-connector-bookshelf/lib/mount-models.js:688:18
at module.exports (../../node_modules/strapi-connector-bookshelf/lib/mount-models.js:708:7)
at mountConnection (../../node_modules/strapi-connector-bookshelf/lib/index.js:84:7)
at async Promise.all (index 0)
at Object.initialize (../../node_modules/strapi-database/lib/connector-registry.js:30:9)
at DatabaseManager.initialize (../../node_modules/strapi-database/lib/database-manager.js:43:5)
at Strapi.load (../../node_modules/strapi/lib/Strapi.js:362:5)
at setupStrapi (tests/helpers/setup.js:9:16)
error Command failed with exit code 1.
Any leads to fix this?
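For reference, the helper in the stack trace follows the usual Strapi v3 unit-testing pattern. A minimal sketch of what a complete tests/helpers/setup.js typically looks like is below; everything outside the lines quoted in the trace (the require, the instance variable, the return and export) is an assumption, not the actual file.

// tests/helpers/setup.js — sketch of a Strapi v3 test bootstrap
const Strapi = require("strapi");

let instance;

async function setupStrapi() {
  if (!instance) {
    // Load the Strapi instance once and reuse it across tests
    instance = await Strapi().load();
    await instance.app
      .use(instance.router.routes())          // populate KOA routes
      .use(instance.router.allowedMethods()); // populate KOA methods
  }
  return instance;
}

module.exports = { setupStrapi };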

Related

Expo fastlane build fails due to issue with logging in EXUpdates / expo-updates

I get this error message when building an iOS app on an Expo managed workflow for SDK 46. The error appears at the end of the log, at line 716: 'Logger' initializer is inaccessible due to 'internal' protection level. The expo-updates package is installed at the latest version (0.15.6).
The expo doctor command shows nothing wrong with the project.
Compiling expo-eas-client Pods/EASClient » EASClient-dummy.m
› Packaging expo-eas-client Pods/EASClient » libEASClient.a
› Executing expo-eas-client Pods/EASClient » Copy generated compatibility header
› Executing expo-updates Pods/EXUpdates » [CP-User] Generate app.manifest for expo-updates
❌ (node_modules/expo-updates/ios/EXUpdates/Logging/UpdatesLogger.swift:15:24)
13 | public static let EXPO_UPDATES_LOG_CATEGORY = "expo-updates"
14 |
> 15 | private let logger = Logger(category: UpdatesLogger.EXPO_UPDATES_LOG_CATEGORY, options: [.logToOS, .logToFile])
| ^ 'Logger' initializer is inaccessible due to 'internal' protection level
16 |
17 | // MARK: - Public logging functions
18 |
❌ (node_modules/expo-updates/ios/EXUpdates/Logging/UpdatesLogger.swift:15:91)
13 | public static let EXPO_UPDATES_LOG_CATEGORY = "expo-updates"
14 |
> 15 | private let logger = Logger(category: UpdatesLogger.EXPO_UPDATES_LOG_CATEGORY, options: [.logToOS, .logToFile])
| ^ extra argument 'options' in call
16 |
17 | // MARK: - Public logging functions
18 |
❌ (node_modules/expo-updates/ios/EXUpdates/Logging/UpdatesLogger.swift:15:93)
13 | public static let EXPO_UPDATES_LOG_CATEGORY = "expo-updates"
14 |
> 15 | private let logger = Logger(category: UpdatesLogger.EXPO_UPDATES_LOG_CATEGORY, options: [.logToOS, .logToFile])
| ^ reference to member 'logToOS' cannot be resolved without a contextual type
16 |
17 | // MARK: - Public logging functions
18 |
❌ (node_modules/expo-updates/ios/EXUpdates/Logging/UpdatesLogger.swift:15:103)
13 | public static let EXPO_UPDATES_LOG_CATEGORY = "expo-updates"
14 |
> 15 | private let logger = Logger(category: UpdatesLogger.EXPO_UPDATES_LOG_CATEGORY, options: [.logToOS, .logToFile])
| ^ reference to member 'logToFile' cannot be resolved without a contextual type
16 |
17 | // MARK: - Public logging functions
18 |
❌ (node_modules/expo-updates/ios/EXUpdates/Logging/UpdatesLogReader.swift:15:32)
13 | public class UpdatesLogReader: NSObject {
14 | private let serialQueue = DispatchQueue(label: "dev.expo.updates.logging.reader")
> 15 | private let logPersistence = PersistentFileLog(category: UpdatesLogger.EXPO_UPDATES_LOG_CATEGORY)
| ^ cannot find 'PersistentFileLog' in scope
16 |
17 | /**
18 | Get expo-updates logs newer than the given date
▸ ** ARCHIVE FAILED **
▸ The following build commands failed:
▸ CompileSwift normal arm64 (in target 'EXUpdates' from project 'Pods')
▸ CompileSwiftSources normal arm64 com.apple.xcode.tools.swift.compiler (in target 'EXUpdates' from project 'Pods')
▸ (2 failures)
2023-01-06 08:55:22.238 xcodebuild[4410:14072] Requested but did not find extension point with identifier Xcode.IDEKit.ExtensionSentinelHostApplications for extension Xcode.DebuggerFoundation.AppExtensionHosts.watchOS of plug-in com.apple.dt.IDEWatchSupportCore
2023-01-06 08:55:22.238 xcodebuild[4410:14072] Requested but did not find extension point with identifier Xcode.IDEKit.ExtensionPointIdentifierToBundleIdentifier for extension Xcode.DebuggerFoundation.AppExtensionToBundleIdentifierMap.watchOS of plug-in com.apple.dt.IDEWatchSupportCore
2023-01-06 08:55:22.327 xcodebuild[4410:14072] XType: failed to connect - Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.fonts was invalidated: failed at lookup with error 3 - No such process." UserInfo={NSDebugDescription=The connection to service named com.apple.fonts was invalidated: failed at lookup with error 3 - No such process.}
2023-01-06 08:55:22.328 xcodebuild[4410:14072] Font server protocol version mismatch (expected:5 got:0), falling back to local fonts
2023-01-06 08:55:22.328 xcodebuild[4410:14072] XType: unable to make a connection to the font daemon!
2023-01-06 08:55:22.328 xcodebuild[4410:14072] XType: XTFontStaticRegistry is enabled as fontd is not available.
** ARCHIVE FAILED **
The following build commands failed:
CompileSwift normal arm64 (in target 'EXUpdates' from project 'Pods')
CompileSwiftSources normal arm64 com.apple.xcode.tools.swift.compiler (in target 'EXUpdates' from project 'Pods')
(2 failures)
Exit status: 65
+-------------+-------------------------+
|           Build environment           |
+-------------+-------------------------+
| xcode_path  | /Applications/Xcode.app |
| gym_version | 2.206.1                 |
| sdk         | iPhoneOS15.5.sdk        |
+-------------+-------------------------+
The issue was solved by upgrading to Expo SDK 47.
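For anyone hitting the same compile errors, the fix is the SDK upgrade itself. A typical sequence with the local Expo CLI looks roughly like this (a sketch, assuming a managed workflow; exact steps vary by project and CLI version):

# bump the Expo SDK, then let the CLI realign native/JS dependency versions
npx expo install expo@^47.0.0
npx expo install --fix

After that, re-running the EAS/fastlane build should compile expo-updates against the fixed version.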

Facing redis problem: unknown command `config`, with args beginning with: `set` on amazon linux

I have been trying to get around this problem since yesterday, but no matter what I do I cannot get Redis working for my project. Here are the details, step by step:
Step 1: I created a new Redis cluster with ElastiCache in the AWS console, then copied the URL shown as the primary endpoint, e.g. my-redisxxxxx.cache.amazonaws.com. I then assigned a custom parameter group to this cluster from the AWS console, where I also configured "notify-keyspace-events" as follows:
It still breaks even if I select the default parameter group and don't set this key.
Step 2: I opened an SSH connection to my Node.js EC2 instance, went to the environment variables file, and assigned the URL like this:
REDIS_URI="my-redisxxxxx.cache.amazonaws.com"
REDIS_HOST="my-redisxxxxx.cache.amazonaws.com"
REDIS_PORT=6379
REDIS_INDEX=14
All other configuration works fine this way; Redis even works if I replace the URL above with the URL of another existing cluster. Only this particular new ElastiCache cluster seems to have the problem.
As soon as I start the Node project with pm2, it hits the wall and throws this:
subscribed to channel ===> __keyevent@14__:expired
0|npm | events.js:377
0|npm | throw er; // Unhandled 'error' event
0|npm | ^
0|npm | ReplyError: ERR unknown command `config`, with args beginning with: `set`, `notify-keyspace-events`, `AKE`,
0|npm | at parseError (/usr/share/nginx/project-directory/node_modules/redis-parser/lib/parser.js:193:12)
0|npm | at parseType (/usr/share/nginx/project-directory/node_modules/redis-parser/lib/parser.js:303:14)
0|npm | Emitted 'error' event on RedisClient instance at:
0|npm | at Object.callbackOrEmit [as callback_or_emit] (/usr/share/nginx/project-directory/node_modules/redis/lib/utils.js:91:14)
0|npm | at RedisClient.return_error (/usr/share/nginx/project-directory/node_modules/redis/index.js:706:11)
0|npm | at JavascriptRedisParser.returnError (/usr/share/nginx/project-directory/node_modules/redis/index.js:196:18)
0|npm | at JavascriptRedisParser.execute (/usr/share/nginx/project-directory/node_modules/redis-parser/lib/parser.js:572:12)
0|npm | at Socket.<anonymous> (/usr/share/nginx/project-directory/node_modules/redis/index.js:274:27)
0|npm | at Socket.emit (events.js:400:28)
0|npm | at Socket.emit (domain.js:475:12)
0|npm | at addChunk (internal/streams/readable.js:293:12)
0|npm | at readableAddChunk (internal/streams/readable.js:267:9)
0|npm | at Socket.Readable.push (internal/streams/readable.js:206:10)
0|npm | at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
0|npm | command: 'CONFIG',
0|npm | args: [ 'set', 'notify-keyspace-events', 'AKE' ],
0|npm | code: 'ERR'
0|npm | }
0|npm | npm
0|npm | ERR! code ELIFECYCLE
0|npm | npm
0|npm | ERR! errno 1
0|npm | npm ERR!
I spent countless hours reading documentation to understand this problem, only to find out that the CONFIG command is restricted in AWS ElastiCache. I also found this answer here, but I have not been able to adapt it to my current code, since I create the Redis connection like this (I am also not sure whether that is a good solution):
const sub = redis.createClient(options);

export const subscribe = async (channel: string) => {
  try {
    sub.subscribe(channel);
    console.log(`subscribed to channel ===> ${channel}`);
    return {};
  } catch (error) {
    console.log("Error while subscribing to a channel", error);
    return {};
  }
};
I am completely lost now and cannot think of what else might solve this.
For those who have a similar problem: I was able to solve this by commenting out the line this.client.config('set', 'notify-keyspace-events', 'AKE'); in the connectRedisDB() function inside redis.database.ts. I commented that line out because the parameter is already set in the Redis parameter group in the AWS console, as described in Step 1 above.
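For context, the change amounts to something like the sketch below. Only the config(...) call and the connectRedisDB()/redis.database.ts names come from the answer above; the client setup around them is illustrative and assumes a node_redis v3 client.

// redis.database.ts — illustrative sketch, not the original file
import redis from "redis";

let client: redis.RedisClient;

export const connectRedisDB = () => {
  client = redis.createClient({
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT),
  });

  // ElastiCache rejects CONFIG, and notify-keyspace-events is already set
  // through the cluster's parameter group, so this call is commented out:
  // client.config('set', 'notify-keyspace-events', 'AKE');
};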

vue cli 3 - ProvidePlugin doesn't work (vue.config.js)

I am trying to add webpack.ProvidePlugin, which isn't working with Vue CLI 3.
I am trying to make lodash a global import (so I won't have to import it in each store module).
vue.config.js:
const webpack = require("webpack");

module.exports = {
  configureWebpack: {
    plugins: [new webpack.ProvidePlugin({ _: "lodash" })]
  }
};
Build error:
Module Warning (from ./node_modules/eslint-loader/index.js):
error: '_' is not defined (no-undef) at src/store/modules/templates.js:24:10:
22 | export default Object.assign({}, base, {
23 | namespaced: true,
> 24 | state: _.cloneDeep(initialState),
| ^
25 | mutations: {
26 | addTemplate(state, template) {
27 | if (!template) throw new Error("template is missing");
I built the project after adding these lines to vue.config.js, and it gave me the error above.
The issue doesn't seem to be with Vue CLI but with ESLint. See this question for a similar issue (just replace d3 with _): Webpack not including ProvidePlugins.
In short, adding this to your ESLint config (often found in .eslintrc.js) should make it work:
"globals": {
"_": true
}
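For example, in a default Vue CLI project the relevant part of .eslintrc.js might look like the sketch below; only the globals entry is the actual fix, and the other fields are placeholders for whatever presets the project already uses.

// .eslintrc.js — sketch; keep your existing settings and just add "globals"
module.exports = {
  root: true,
  env: { node: true },
  extends: ["plugin:vue/essential", "eslint:recommended"],
  globals: {
    // "_" is injected globally by webpack.ProvidePlugin, so tell ESLint about it
    _: true
  }
};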

NoHttpResponseException on uploading file to S3 (camel-aws)

I am trying to upload a roughly 10 GB file from my local machine to S3 (inside a Camel route). The file gets uploaded in around 3-4 minutes, but the upload also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
Because of this, the Camel route doesn't stop and keeps throwing InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(S3Client.getS3Client());

// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(
    Utils.getProperty(Constants.BUCKET),
    getS3Key(file.getName()), file);

try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
}
The stack trace doesn't contain any reference to my code, so I couldn't determine where the issue is.
Any help or pointer would be really appreciated.
Thanks

Grails possible race condition in database session?

I'm learning Grails and reading the Grails in Action book. I tried to run some tests from it, but got behaviour that seems strange to me. Here is a simple integration test:
@Test
public void testProjections() throws Exception {
    User user1 = new User(mail: 'test1@test.tld', password: 'password1').save(flush: true)
    User user2 = new User(mail: 'test2@test.tld', password: 'password2').save(flush: true)
    assertNotNull(user1)
    assertNotNull(user2)
    // Chain: add Tag to Post
    user1.addToPosts(new Post(content: 'First').addToTags(new Tag(name: 'tag-0')))
    // Separately add a tag to the post
    Post post = user1.posts.iterator().next()
    Tag tag1 = new Tag(name: 'tag-1')
    post.addToTags(tag1)
    // http://stackoverflow.com/questions/6288991/do-i-ever-need-to-explicitly-flush-gorm-save-calls-in-grails
    // Have tried with and without the next line, without success:
    //sessionFactory.getCurrentSession().flush()
    assertEquals(['tag-0', 'tag-1'], user1.posts.iterator().next().tags*.name.sort()) // line 154
    …
}
Then I ran it twice in a row:
grails>
grails> test-app -rerun -integration
| Running 5 integration tests... 2 of 5
| Failure: testProjections(com.tariffus.QueryIntegrationTests)
| java.lang.AssertionError: expected:<[tag-0, tag-1]> but was:<[tag-1]>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at com.tariffus.QueryIntegrationTests.testProjections(QueryIntegrationTests.groovy:154)
| Completed 5 integration tests, 1 failed in 0m 0s
| Tests FAILED - view reports in /home/pasha/Projects/grails/com.tariffus/target/test-reports
grails>
grails> test-app -rerun -integration
| Running 5 integration tests... 2 of 5
| Failure: testProjections(com.tariffus.QueryIntegrationTests)
| java.lang.AssertionError: expected:<[3, 1, 2]> but was:<[[tag-1, tag-2, tag-0, tag-5, tag-3, tag-4], [tag-6]]>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at com.tariffus.QueryIntegrationTests.testProjections(QueryIntegrationTests.groovy:164)
| Completed 5 integration tests, 1 failed in 0m 0s
| Tests FAILED - view reports in /home/pasha/Projects/grails/com.tariffus/target/test-reports
grails>
As you can see, the first run fails on line 154, while the second run, started just afterwards without any modification, gets further.
I use a Postgres database, and the test environment's dataSource is configured with dbCreate = 'update'.
What am I doing wrong, and why does it only work sometimes?
I would say the source of the problem is this line:
user1.addToPosts(new Post(content: 'First').addToTags(new Tag(name: 'tag-0')))
These dynamic addTo* methods do not propagate the save to the associated instances until save() is called on the parent instance, so calling save() on user1 afterwards should fix it:
user1.addToPosts(new Post(content: 'First').addToTags(new Tag(name: 'tag-0')))
user1.save()
This should propagate save() to the Post instance first and then transitively to the Tag instance.
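If the assertion still sees a stale collection within the same test, flushing at that point is also worth trying (a hedged suggestion building on the flush discussion in the question, not part of the original answer):

user1.addToPosts(new Post(content: 'First').addToTags(new Tag(name: 'tag-0')))
user1.save(flush: true) // flush so the post and tag are persisted before the assertion re-reads them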