I want to watch an upload folder for new files and trigger a script when anything is uploaded. For this purpose I have installed watchman on my CentOS 7 box and set it up to watch the upload folder. This works, but when a large file is being uploaded, watchman will trigger one or more times before the upload is complete. Since my script moves the file, this can result in corrupted data and failed uploads. How can I filter out these "partial" triggers?
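For reference, I set up the watch and trigger roughly like this (the paths and script name below are simplified placeholders, not my exact setup):

watchman watch /data/uploads
watchman -j <<-EOT
["trigger", "/data/uploads", {
  "name": "process-upload",
  "expression": ["match", "*"],
  "stdin": ["name", "size", "exists", "new", "mode", "oclock"],
  "command": ["/usr/local/bin/process-upload.sh"]
}]
EOT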
As an example, a test script I made that just dumped information to a file gave me this output during a single upload:
Wed Sep 18 08:39:20 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:743734", "exists": true, "new": true, "size": 5924978688, "name": "test.nc"}]
Wed Sep 18 08:39:22 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:747283", "exists": true, "new": false, "size": 6056411136, "name": "test.nc"}]
Wed Sep 18 08:39:22 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:747324", "exists": true, "new": false, "size": 6057754624, "name": "test.nc"}]
Wed Sep 18 08:39:24 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:752502", "exists": true, "new": false, "size": 6229433544, "name": "test.nc"}]
I was able to work around this issue by adjusting the "settle" parameter. Apparently the default of 20 ms is too low for network transfers, presumably because incoming data gets buffered, leaving brief periods where the disk is idle. After bumping this setting up to 500 ms, watchman no longer triggers before the file transfer is complete.
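For anyone else hitting this: settle lives in the .watchmanconfig file at the root of the watched folder and is given in milliseconds, so a minimal sketch looks like this:

{
  "settle": 500
}

Watchman reads this file when the watch is established, so you may need to delete and re-create the watch (watchman watch-del, then watchman watch) for the change to take effect.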
I'm attempting to run Lighthouse in a Docker container for eventual deployment to AWS Lambda (using its new Docker container image deployments). However, I'm getting an error that I can't seem to understand.
The following is my Dockerfile:
FROM amazon/aws-lambda-nodejs:12
ENV AWS_LAMBDA_FUNCTION_MEMORY_SIZE=10240
RUN curl https://intoli.com/install-google-chrome.sh | bash
COPY package.json .
RUN npm install
COPY app.js .
CMD ["app.handler"]
The following is my app.js:
const chromeLauncher = require("chrome-launcher");
const lighthouse = require("lighthouse");

exports.handler = async (event) => {
  const chrome = await chromeLauncher.launch({
    logLevel: "info",
    chromeFlags: [
      "--headless",
      "--no-sandbox",
      "--disable-gpu",
      "--disable-dev-shm-usage",
      "--single-process",
    ],
  });
  const results = await lighthouse("https://example.com", {
    port: chrome.port,
    disableStorageReset: true,
    onlyCategories: ["performance"],
    logLevel: "info",
  });
  return {
    statusCode: 200,
    results,
  };
};
and the following is the log output. Lighthouse seems to be able to connect to the browser, but then hangs on the first command it sends.
START RequestId: c9d7a07b-a5e2-4d03-8bf5-d0b5d248e3e7 Version: $LATEST
Sun, 24 Jan 2021 16:00:56 GMT ChromeLauncher Waiting for browser.
Sun, 24 Jan 2021 16:00:56 GMT ChromeLauncher Waiting for browser...
Sun, 24 Jan 2021 16:00:57 GMT ChromeLauncher Waiting for browser.....
Sun, 24 Jan 2021 16:00:57 GMT ChromeLauncher Waiting for browser.....✓
Sun, 24 Jan 2021 16:00:58 GMT status Connecting to browser
Sun, 24 Jan 2021 16:00:58 GMT status Resetting state with about:blank
Sun, 24 Jan 2021 16:01:28 GMT status Disconnecting from browser...
2021-01-24T16:01:28.172Z c9d7a07b-a5e2-4d03-8bf5-d0b5d248e3e7 ERROR Invoke Error {"errorType":"LHError","errorMessage":"PROTOCOL_TIMEOUT","code":"PROTOCOL_TIMEOUT","name":"LHError","friendlyMessage":"Waiting for DevTools protocol response has exceeded the allotted time. (Method: Network.enable)","lhrRuntimeError":true,"protocolMethod":"Network.enable","stack":["LHError: PROTOCOL_TIMEOUT"," at Timeout._onTimeout (/var/task/node_modules/lighthouse/lighthouse-core/gather/driver.js:409:21)"," at listOnTimeout (internal/timers.js:554:17)"," at processTimers (internal/timers.js:497:7)"]}
END RequestId: c9d7a07b-a5e2-4d03-8bf5-d0b5d248e3e7
REPORT RequestId: c9d7a07b-a5e2-4d03-8bf5-d0b5d248e3e7 Init Duration: 3.53 ms Duration: 33422.31 ms Billed Duration: 33500 ms Memory Size: 2010240 MB Max Memory Used: 2010240 MB
To run on AWS Lambda, you should use:
const chromium = require("chrome-aws-lambda");
const lighthouse = require("lighthouse");

const chrome = await chromium.puppeteer.launch({
  args: [
    ...chromium.args,
    "--disable-dev-shm-usage",
    "--remote-debugging-port=9222",
  ],
  defaultViewport: chromium.defaultViewport,
  executablePath: await chromium.executablePath,
  headless: chromium.headless,
  ignoreHTTPSErrors: true,
});

const options = {
  logLevel: "info",
  output: "html",
  onlyCategories: ["performance"],
  preset: "mobile",
  port: 9222,
};

const runnerResult = await lighthouse(`${body.url}`, options);
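This works because chrome-aws-lambda bundles a headless Chromium binary compiled for the Amazon Linux environment that Lambda runs in, and chromium.args supplies the launch flags that environment needs; a stock Chrome install often fails or stalls there because of missing shared libraries and sandbox restrictions.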
But I'm facing errors in the performance category; there is no way to make it work. All the other categories work fine.
The following is my Lambda handler, which expects the users data from queryStringParameters:
export const lambdaHandler = async (event, context) => {
  try {
    const numberOfUsersRequested = (event && event.queryStringParameters.users) ? event.queryStringParameters.users : 10;
    const users = await generateUsers(numberOfUsersRequested).then(data => data.users);
    // ... build and return the response from users
  } catch (err) {
    // ... error handling
  }
};
I'm using AWS SAM to develop my Lambda, and I can test it locally using event.json as the input event. Here is a chunk of event.json where I'm passing the queryStringParameters users like this:
{
  "body": "{\"message\": \"mock data\"}",
  "resource": "/{proxy+}",
  "path": "/path/to/resource",
  "httpMethod": "POST",
  "isBase64Encoded": false,
  "queryStringParameters": {
    "users": 200
  },
Now, how can I pass the same queryStringParameters from the AWS API Gateway console? Currently, I'm getting this 500 error in the API Gateway console:
{
  "message": "Internal server error"
}
Mon Sep 28 01:24:15 UTC 2020 : Endpoint response headers: {Date=Mon, 28 Sep 2020 01:24:15 GMT, Content-Type=application/json, Content-Length=2, Connection=keep-alive, x-amzn-RequestId=0e1f110c-e80c-4ff1-870a-5cafd04167db, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5f713b3d-4762f9b07ee8c1d7c6623574;sampled=0}
Mon Sep 28 01:24:15 UTC 2020 : Endpoint response body before transformations: {}
Mon Sep 28 01:24:15 UTC 2020 : Execution failed due to configuration error: Output mapping refers to an invalid method response: 200
Mon Sep 28 01:24:15 UTC 2020 : Method completed with status: 500
I have performed the following steps to mitigate the issue, but it is still not working. It looks like something is missing:
1) Added the URL query string parameter users in the Method Request (refer to the screenshot).
2) In the Integration Request -> Mapping Templates, added the following mapping for application/json:
{
  "users": "$input.params('users')"
}
3) And finally, passing the query string as users=6.
Since you're using a non-proxy integration with that mapping template, the event your Lambda receives is just the template output. In your case, the event should just be:
{
  "users": "6"
}
You can add the following to the beginning of your handler to confirm:
console.log(JSON.stringify(event, null, 2));
Therefore, to get the users value, you should use event.users, not event.queryStringParameters.users.
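As a sketch, keeping the fallback of 10 from your code, the lookup at the top of the handler then becomes:

const numberOfUsersRequested = event.users ? event.users : 10;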
I'm trying to test my Angular 5 project with Jasmine and Karma, but it shows me this error:
myProject\Front\src\app> ng test
Your global Angular CLI version (7.0.3) is greater than your local
version (1.7.4). The local Angular CLI version is used.
To disable this warning use "ng config -g cli.warnings.versionMismatch false".
30 10 2018 09:42:25.435:WARN [karma]: No captured browser, open http://localhost:9876/
30 10 2018 09:42:38.011:INFO [karma]: Karma v0.13.9 server started at http://localhost:9876/
30 10 2018 09:42:38.016:INFO [launcher]: Starting browser Chrome
30 10 2018 09:42:41.074:INFO [Chrome 69.0.3497 (Windows 10 0.0.0)]: Connected on socket RiQkM_n-0oc-xuYAAAAA with id 49828596
Chrome 69.0.3497 (Windows 10 0.0.0) ERROR: 'DEPRECATION:', 'Setting specFilter directly on Env is deprecated, please use the specFilter option in `configure`'
Chrome 69.0.3497 (Windows 10 0.0.0): Executed 0 of 0 ERROR (0.003 secs / 0 secs)
and this is my karma.conf.js:
// Karma configuration
// Generated on Mon Oct 29 2018 16:09:43 GMT+0000 (Maroc)

module.exports = function (config) {
  config.set({

    // base path that will be used to resolve all patterns (eg. files, exclude)
    basePath: '',

    // frameworks to use
    // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
    frameworks: ['jasmine'],

    // plugins needed
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
    ],

    // list of files / patterns to load in the browser
    files: [
      'src/app/*.spec.ts',
    ],

    // list of files / patterns to exclude
    exclude: [],

    // preprocess matching files before serving them to the browser
    // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
    preprocessors: {},

    // test results reporter to use
    // possible values: 'dots', 'progress'
    // available reporters: https://npmjs.org/browse/keyword/karma-reporter
    reporters: ['progress'],

    // web server port
    port: 9876,

    // enable / disable colors in the output (reporters and logs)
    colors: true,

    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,

    // enable / disable watching files and executing tests whenever any file changes
    autoWatch: true,

    // start these browsers
    // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
    browsers: ['Chrome'],

    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: false,

    // Concurrency level
    // how many browsers should be started simultaneously
    concurrency: Infinity
  })
}
and this is the interface in Google Chrome (screenshot not included).
I'm just a beginner 😄 and I'm not sure if this information is enough. I have tried a lot of the solutions that I found, but it's always the same problem. Can someone help me?
This is the result of an incompatibility between newer versions of Chrome (failing for me with version 70) and the newest version of Jasmine (3.3.0). The short-term workaround is to lock your jasmine-core version to 3.2.0.
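For example, you can pin the exact version (note the --save-exact, which writes "jasmine-core": "3.2.0" with no caret into devDependencies so npm won't drift back to 3.3.0):

npm install jasmine-core@3.2.0 --save-dev --save-exact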
I am using Ubuntu 14.04.
I have started to explore querying HDFS using Apache Drill. I installed it on my local system and configured the storage plugin to point to the remote HDFS. Below is the configuration setup:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://devlpmnt.mycrop.kom:8020",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json"
    }
  }
}
After creating a JSON file "rest.json", I ran the following query:
select * from hdfs.`/tmp/rest.json` limit 1
I am getting the following error:
org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: From line 1, column 15 to line 1, column 18: Table 'hdfs./tmp/rest.json' not found
I would appreciate it if someone could help me figure out what is wrong.
Thanks in advance!!
I want to get a URL to play a media file from Dropbox. I tested the shared link from the user account with cvlc, but it won't work.
How can I generate a URL to play a media file?
The metadata endpoint doesn't provide a URL to play the file; it only returns the following:
https://api.dropbox.com/1/metadata/auto/
{
  "size": "225.4KB",
  "rev": "35e97029684fe",
  "thumb_exists": false,
  "bytes": 230783,
  "modified": "Tue, 19 Jul 2011 21:55:38 +0000",
  "client_mtime": "Mon, 18 Jul 2011 18:04:35 +0000",
  "path": "/Getting_Started.pdf",
  "is_dir": false,
  "icon": "page_white_acrobat",
  "root": "dropbox",
  "mime_type": "application/pdf",
  "revision": 220823
}
I think you're looking for /media, which returns a direct link to the file contents that works for 4 hours.
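For example, with the v1 REST API (the path and token below are placeholders; "auto" resolves the root the same way as in your metadata call):

curl -X POST https://api.dropbox.com/1/media/auto/Getting_Started.pdf \
  -H "Authorization: Bearer <ACCESS_TOKEN>"

The response is a small JSON object containing a url (plus an expires timestamp) that you can hand straight to cvlc.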