I want to get a URL to play a media file from Dropbox. I tested the shared link from the user's account with cvlc, but it won't work. How can I generate a URL to play the media file?
The metadata call (https://api.dropbox.com/1/metadata/auto/) doesn't provide a URL to play the file; it only returns:
{
"size": "225.4KB",
"rev": "35e97029684fe",
"thumb_exists": false,
"bytes": 230783,
"modified": "Tue, 19 Jul 2011 21:55:38 +0000",
"client_mtime": "Mon, 18 Jul 2011 18:04:35 +0000",
"path": "/Getting_Started.pdf",
"is_dir": false,
"icon": "page_white_acrobat",
"root": "dropbox",
"mime_type": "application/pdf",
"revision": 220823
}
I think you're looking for /media, which returns a direct link to the file contents that works for 4 hours.
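As a rough sketch of what that call might look like in Python (the /1/media/auto/ path mirrors the /1/metadata/auto/ URL above; the access token, Bearer auth header, and file path are placeholders and assumptions, not taken from the question):
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder OAuth access token

# POST to the /media endpoint for the same path used in the metadata call;
# the response contains a temporary direct link to the file contents.
resp = requests.post(
    "https://api.dropbox.com/1/media/auto/Getting_Started.pdf",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
media_url = resp.json()["url"]  # playable for about 4 hours, e.g. with cvlc
print(media_url)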
Following is my Lambda handler, which expects the users value from queryStringParameters:
export const lambdaHandler = async (event, context) => {
  try {
    const numberOfUsersRequested = (event && event.queryStringParameters.users)
      ? event.queryStringParameters.users
      : 10;
    const users = await generateUsers(numberOfUsersRequested).then(data => data.users);
I'm using AWS SAM to develop my Lambda, and I can test it locally using event.json as the input event. Here is a chunk of event.json where I'm passing the queryStringParameters users value like this:
{
"body": "{\"message\": \"mock data\"}",
"resource": "/{proxy+}",
"path": "/path/to/resource",
"httpMethod": "POST",
"isBase64Encoded": false,
"queryStringParameters": {
"users": 200
},
Now, how can I pass the same query string parameters from the AWS API Gateway console? Currently, I'm getting this 500 error in the API Gateway console:
{
"message": "Internal server error"
}
Mon Sep 28 01:24:15 UTC 2020 : Endpoint response headers: {Date=Mon, 28 Sep 2020 01:24:15 GMT, Content-Type=application/json, Content-Length=2, Connection=keep-alive, x-amzn-RequestId=0e1f110c-e80c-4ff1-870a-5cafd04167db, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5f713b3d-4762f9b07ee8c1d7c6623574;sampled=0}
Mon Sep 28 01:24:15 UTC 2020 : Endpoint response body before transformations: {}
Mon Sep 28 01:24:15 UTC 2020 : Execution failed due to configuration error: Output mapping refers to an invalid method response: 200
Mon Sep 28 01:24:15 UTC 2020 : Method completed with status: 500
I have performed the following steps to mitigate the issue, but it is not working; it looks like something is missing:
1) Added the URL query string parameter users in the Method Request (refer to the screenshot).
2) In the Integration Request -> Mapping Templates, added the following mapping for application/json:
{
"users": "$input.params('users')"
}
3) And finally, passed the query string as users=6.
In your case, the event should just be:
{
"users": "6"
}
You can add the following to the beginning of your handler to confirm:
console.log(JSON.stringify(event, null, 2));
Therefore, to get the users value, you should use event.users, not event.queryStringParameters.users.
I want to watch an upload folder for new files and trigger a script when anything is uploaded. For this purpose, I have installed watchman on my CentOS 7 box and set it up to watch the upload folder. This works, but when a large file is being uploaded, watchman will trigger one or more times before the upload is complete. Since my script will be moving the file, this can result in corrupted data and failed uploads. How can I filter out these "partial" triggers?
As an example, a test script I made that just dumped information to a file gave me this output during a single upload:
Wed Sep 18 08:39:20 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:743734", "exists": true, "new": true, "size": 5924978688, "name": "test.nc"}]
Wed Sep 18 08:39:22 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:747283", "exists": true, "new": false, "size": 6056411136, "name": "test.nc"}]
Wed Sep 18 08:39:22 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:747324", "exists": true, "new": false, "size": 6057754624, "name": "test.nc"}]
Wed Sep 18 08:39:24 AKDT 2019 - test.nc STDIN: [{"mode": 33188, "oclock": "c:1568822800:18913:1:752502", "exists": true, "new": false, "size": 6229433544, "name": "test.nc"}]
I was able to work around this issue by adjusting the "settle" parameter. Apparently the default of 20 ms is too low for network transfers, resulting in periods where the disk is idle as incoming data is buffered or something. By bumping this setting up to 500, watchman no longer triggers before the file transfer is complete.
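For reference, settle is a per-root setting that lives in a .watchmanconfig file at the top of the watched directory, and the value is in milliseconds; a minimal sketch using the value mentioned above:
{
  "settle": 500
}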
Using boto3, I am trying to retrieve a Microsoft Word document stored in S3. However, when I access the object by calling client.get_object(), the content-length of the Word document is 0, while files with a .txt extension return the correct content-length. Is there a way to decode the Word document in order to write its output to a stream?
I have tested this with .txt files and .docx files, and I have also tried using the .decode() method after reading the file, but based on the content being returned, there doesn't seem to be anything to decode.
Accessing the .txt document, I notice that the content-length is 17 (the number of characters in the file) and the contents can be read by calling txt_file['Body'].read()
s3 = boto3.client('s3')
txt_file = s3.get_object(Bucket="test_bucket", Key="test.txt")
>>> txt_file
{
u'Body': <botocore.response.StreamingBody object at 0x7fc5f0074f10>,
u'AcceptRanges': 'bytes',
u'ContentType': 'text/plain',
'ResponseMetadata': {
'HTTPStatusCode': 200,
'RetryAttempts': 0,
'HTTPHeaders': {
'content-length': '17',
'accept-ranges': 'bytes',
'server': 'AmazonS3',
'last-modified': 'Sat, 06 Jul 2019 02:13:45 GMT',
'date': 'Sat, 06 Jul 2019 15:58:21 GMT',
'x-amz-server-side-encryption': 'AES256',
'content-type': 'text/plain'
}
}
}
Accessing the .docx document, I notice that the content-length is 0 (even though the document has the same string written to it as the .txt file), and calling word_file['Body'].read() returns the empty string u''
s3 = boto3.client('s3')
word_file = s3.get_object(Bucket="test_bucket", Key="test.docx")
>>> word_file
{
u'Body': <botocore.response.StreamingBody object at 0x7fc5f0074f10>,
u'AcceptRanges': 'bytes',
u'ContentType': 'binary/octet-stream',
'ResponseMetadata': {
'HTTPStatusCode': 200,
'RetryAttempts': 0,
'HTTPHeaders': {
'content-length': '0',
'accept-ranges': 'bytes',
'server': 'AmazonS3',
'last-modified': 'Thu, 04 Jul 2019 21:51:53 GMT',
'date': 'Sat, 06 Jul 2019 15:58:30 GMT',
'x-amz-server-side-encryption': 'AES256',
'content-type': 'binary/octet-stream'
}
}
}
I expect the content-length of both files to reflect the number of bytes in the file; however, only the .txt file is returning data.
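For what it's worth, here is a small sketch (same placeholder bucket and key as above) that asks S3 what it has recorded for the object without downloading the body; if ContentLength is 0 here as well, the object was already empty when it was uploaded rather than being mangled on download:
import boto3

s3 = boto3.client('s3')

# HEAD request: returns the stored object's metadata without fetching the body
head = s3.head_object(Bucket="test_bucket", Key="test.docx")
print(head["ContentLength"], head["ContentType"])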
I am trying to edit files on EC2 remotely. I spent a while setting up the config file, but I still get a timeout error.
I am using a Mac, and I have already run chmod 400 on the .pem file.
{
"type": "sftp",
"sync_down_on_open": true,
"host": "xxx.xx.xx.xxx",
"user": "ubuntu",
"remote_path": "/home/ubuntu/",
"connect_timeout": 30,
"sftp_flags": ["-o IdentityFile=/Users/kevinzhang/Desktop/zhang435_ec2.pem"],
}
I figured it out. Just in case anyone else has the same problem:
I am using macOS and the instance runs Ubuntu.
The config file I have now looks like this:
{
// The tab key will cycle through the settings when first created
// Visit http://wbond.net/sublime_packages/sftp/settings for help
// sftp, ftp or ftps
"type": "sftp",
// "save_before_upload": true,
"upload_on_save": true,
"sync_down_on_open": true,
"sync_skip_deletes": false,
"sync_same_age": true,
"confirm_downloads": false,
"confirm_sync": true,
"confirm_overwrite_newer": false,
"host": "xxxx.compute.amazonaws.com",
"user": "ubuntu",
//"password": "password",
"port": "22",
"remote_path": "/home/ubuntu/",
"ignore_regexes": [
"\\.sublime-(project|workspace)", "sftp-config(-alt\\d?)?\\.json",
"sftp-settings\\.json", "/venv/", "\\.svn/", "\\.hg/", "\\.git/",
"\\.bzr", "_darcs", "CVS", "\\.DS_Store", "Thumbs\\.db", "desktop\\.ini"
],
//"file_permissions": "664",
//"dir_permissions": "775",
//"extra_list_connections": 0,
"connect_timeout": 30,
//"keepalive": 120,
//"ftp_passive_mode": true,
//"ftp_obey_passive_host": false,
"ssh_key_file": "~/.ssh/id_rsa",
"sftp_flags": ["-o IdentityFile=<YOUR.PEM FILE path>"],
//"preserve_modification_times": false,
//"remote_time_offset_in_hours": 0,
//"remote_encoding": "utf-8",
//"remote_locale": "C",
//"allow_config_upload": false,
}
If you have a permission problem:
chmod -R 0777 /home/ubuntu/YOURFILE/
This enables read, write, and execute for all users.
You may want to create a new user if the above does not work for you:
https://habd.as/sftp-to-ubuntu-server-sublime-text/
I do not know if this makes a difference, but it started working for me for both users once I created a new user.
I am using Ubuntu 14.04.
I have started exploring querying HDFS with Apache Drill: I installed it on my local system and configured the storage plugin to point to a remote HDFS. Below is the configuration:
{
"type": "file",
"enabled": true,
"connection": "hdfs://devlpmnt.mycrop.kom:8020",
"workspaces": {
"root": {
"location": "/",
"writable": false,
"defaultInputFormat": null
}
},
"formats": {
"json": {
"type": "json"
}
}
}
After creating a JSON file, "rest.json", I ran the query:
select * from hdfs.`/tmp/rest.json` limit 1
I am getting the following error:
org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: From line 1, column 15 to line 1, column 18: Table 'hdfs./tmp/rest.json' not found
I would appreciate it if someone could help me figure out what is wrong.
Thanks in advance!
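One thing worth checking (a sketch only, assuming the plugin really is registered under the name hdfs and /tmp/rest.json is readable on that HDFS): qualify the path with the root workspace defined in the plugin configuration above, for example:
select * from hdfs.root.`/tmp/rest.json` limit 1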