As I read the various posts here and the libtorrent documentation, I understand that (as documented) I have to download the torrent in order to get the metadata. But consider how the uTorrent app works: when I just start downloading, I get the metadata within a second, and once I have it I can pause the download. So it doesn't force me to download the complete file just to return the metadata.
So, is there a way to get the metadata without downloading the complete file?
libtorrent's metadata_received_alert is what you want. This alert is posted once the metadata finishes downloading. Do make sure that you're receiving status notifications, though.
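As a rough sketch of the flow (assuming the Python bindings, and a session created with status notifications enabled in its alert mask): add the magnet link, poll the alert queue until a metadata_received_alert shows up, then pause the handle so no payload data is downloaded. The helper below is written against any object exposing pop_alerts(), so the waiting logic stands on its own:

```python
import time

def wait_for_metadata(ses, handle, alert_type, timeout=30.0, sleep=time.sleep):
    """Poll ses.pop_alerts() until an alert of alert_type arrives,
    then pause the handle so no payload data is downloaded."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for alert in ses.pop_alerts():
            if isinstance(alert, alert_type):
                handle.pause()  # metadata is in; stop before the payload downloads
                return True
        sleep(0.1)
    return False
```

With the real library this would be invoked roughly as wait_for_metadata(ses, h, lt.metadata_received_alert) after adding the magnet URI; the exact session setup (alert mask including status notifications) is described in the libtorrent documentation.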
I have a general understanding question. I am building a Flutter app that relies on a content library containing text files, LaTeX equations, images, PDFs, videos, etc.
The content lives on an AWS Amplify backend. Depending on how the user navigates the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is that the data is stored in an S3 bucket; when data is requested, it is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel it is not the way it should be done.
When data is downloaded, a file-transfer notification pops up, which bothers me because it is shown all the time. I would also like to read the data directly with something like a GET request, without downloading the file first (especially for text files, which I would like to read directly into a String). But I don't know how that works, because I don't see a way to store files with the other Amplify services such as DataStore or the REST API. On the other hand, an S3 bucket is an intuitive way of storing data that is easy for my company's content creators to use, so to me S3 seems the way to go. However, with S3 I have only figured out the download method for fetching data.
Could someone give me a hint on what is the correct approach for this use case? Thank you very much!
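Whatever SDK ends up being used, the "read it straight into a string" idea mentioned above is just a GET on the object instead of a download-to-file. Purely as an illustration (a Python boto3 sketch, not Amplify/Dart; bucket and key names are placeholders), reading a text object directly looks like:

```python
def read_text_object(s3, bucket, key, encoding="utf-8"):
    """Fetch an S3 object and decode its body straight into a string,
    with no temporary file on disk."""
    resp = s3.get_object(Bucket=bucket, Key=key)
    return resp["Body"].read().decode(encoding)
```

With boto3 this would be called as read_text_object(boto3.client("s3"), "my-bucket", "notes.txt"); Amplify's Storage category exposes a similar data-into-memory (rather than file) download on most platforms.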
I have a 121MB MP3 file I am trying to upload to my AWS S3 so I can process it via Amazon Transcribe.
The MP3 file comes from an MP4 file I stripped the audio from using FFmpeg.
When I try to upload the MP3, using the S3 object upload UI in the AWS console, I receive the below error:
InvalidPart
One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
The error refers to the MP3 as if it were a multipart upload with a missing part, but it is not a multipart file.
I have re-run the MP4 file through FFmpeg 3 times in case the 1st file was corrupt, but that has not fixed anything.
I have searched Stack Overflow extensively and have not found a similar case where anyone uploading a single 5 MB+ file received the error I am getting.
I've also ruled out FFmpeg being the issue by saving the audio as an MP3 using VLC, but I receive the exact same error.
What is the issue?
121 MB is below the 160 GB single-object upload limit in the S3 console, the 5 GB single-object upload limit for the REST API / AWS SDKs, and the 5 TB multipart upload limit, so I really can't see the issue.
Assuming the file exists and you have a stable internet connection (no corrupted uploads), you may somehow have incomplete multipart upload parts in your bucket that are conflicting with the upload. Either follow this guide to remove them and try again, or try creating a new folder/bucket and re-uploading.
You may also have a browser caching issue or an extension conflict, so try incognito mode (with extensions disabled) or another browser if re-uploading to another bucket/folder doesn't work.
Alternatively, try the AWS CLI's s3 cp command, or write a quick "S3 file upload" application in a supported SDK language, to make sure it's not a console UI issue.
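If removing the stale parts by hand is awkward, the cleanup in the linked guide can also be scripted. A hedged boto3 sketch (the bucket name is a placeholder, and the client is passed in so the logic stands alone):

```python
def abort_incomplete_multipart_uploads(s3, bucket):
    """List any in-progress multipart uploads in the bucket and abort
    them, returning the keys that were cleaned up."""
    resp = s3.list_multipart_uploads(Bucket=bucket)
    aborted = []
    for upload in resp.get("Uploads", []):
        s3.abort_multipart_upload(
            Bucket=bucket, Key=upload["Key"], UploadId=upload["UploadId"]
        )
        aborted.append(upload["Key"])
    return aborted
```

In production you'd also handle pagination, or simply configure a bucket lifecycle rule with AbortIncompleteMultipartUpload so S3 cleans these up automatically.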
I have a Mautic marketing automation instance installed on my server (I am a beginner).
I keep running into this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6,000 KB).
However, once I go back into Mautic to run the lookup, the newly added file reverts back to 0 bytes, and I still can't get the IP lookup configured.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue, please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and, where possible, run the update at the command line to avoid browser timeouts :)
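For anyone scripting the manual workaround the error message describes (download the .gz, extract, place the .mmdb), here is a minimal sketch. Note that MaxMind now requires a license key, so the old unauthenticated URL in the error message no longer works; the URL below is a placeholder, and the fetch function is injectable:

```python
import gzip
import shutil
import urllib.request

def fetch_and_extract(url, dest_path, fetch=urllib.request.urlretrieve):
    """Download a gzipped database dump and decompress it to dest_path."""
    gz_path = dest_path + ".gz"
    fetch(url, gz_path)  # download the .gz next to the destination
    with gzip.open(gz_path, "rb") as src, open(dest_path, "wb") as dst:
        shutil.copyfileobj(src, dst)  # stream-decompress to the .mmdb path
    return dest_path
```

After running it, check that the resulting .mmdb is non-zero in size before pointing Mautic at it, and remember that a still-enabled cron job or dashboard button can overwrite it again, as noted above.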
I am using Postman for API calls, as I am trying to batch-download thousands of files from a database.
https://website.com/services/rest/connect/v1.4/incidents/439006/fileAttachments/?download
This creates the call, but I then have to click Save Response -> Save to a file to download the attachment.
Is this something that's possible in postman?
My IT department is very strict about downloading development environments; I only have Postman and RStudio. I know you can potentially use RCurl, but considering I don't know how to use curl, I don't know where to start.
What I want to do is download any attachment (if it exists) for a number of keys.
And in a loop call the file:
i = 0
loop while i < nrows(file)
    i = i + 1
    key = keys(i)
    GET https://website.com/services/rest/connect/v1.4/incidents/{key}/fileAttachments/?download
    save the response to a file named {key}
end loop
I can't get it to work. I want a folder full of thousands of downloads, each named after its key.
You can use the command below on a Linux system:
curl -v "https://website.com/services/rest/connect/v1.4/incidents/{key}/fileAttachments/?download"
You can also try sending the request from Postman and saving that response.
Let me know if that works.
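Since the goal is one file per key rather than a single call, the loop from the question can be scripted outside Postman. A hedged Python sketch (the URL pattern is taken from the question; the key list, any authentication headers, and the .zip extension are assumptions, and the fetch function is injectable):

```python
import urllib.request

URL_TEMPLATE = ("https://website.com/services/rest/connect/v1.4"
                "/incidents/{key}/fileAttachments/?download")

def download_attachments(keys, fetch=urllib.request.urlretrieve):
    """Download each key's attachment bundle to a file named after the key."""
    saved = []
    for key in keys:
        url = URL_TEMPLATE.format(key=key)
        filename = "{}.zip".format(key)  # extension is a guess; adjust to what the API returns
        fetch(url, filename)
        saved.append(filename)
    return saved
```

The same shape translates directly to R (download.file inside a for loop over the keys), which fits the RStudio-only constraint mentioned in the question.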
I have an MQFTE setup where we receive files from an external vendor. The files get dumped on a server in the DMZ, and we have an MQFTE agent that picks the files up from that server and drops them on our server.
We receive files in "sets", i.e. each incoming file has an associated XML file that describes it and contains its metadata, e.g. applicationform.pdf and applicationform.xml. The final application stores the PDF file based on the data/metadata in the XML.
Since the trigger is fired for each incoming file, we check in the trigger whether we've received both the XML file and the content file (e.g. the PDF).
However, I don't think this is the best approach, as it adds a lot of bookkeeping code to check for concurrency issues when both files arrive at the same time. Is there a way to:
1. Restrict the trigger so that it only fires when both files have arrived? From my research, this is not possible.
2. Configure the agent on the server so that it only receives one file at a time? Looking at the documentation, it seems this can be achieved, but only on the agent initiating the transfer, not on the agent receiving it. The documentation hints at monitorMaxResourcesInPoll and the -bs parameter, but those would apply to the source agent, I guess; since that agent is shared with multiple systems, this would impact them as well.
I would also appreciate any tips, suggestions, or even alternative solutions that best meet the requirement.
I don't think there is a way to check for both files existing before the monitor triggers. What some users do is send all of the files they want to transfer and then, finally, put a 'marker' file in the directory, which the resource monitor looks for. Because the marker file is only written after all the other files are ready to be sent, the monitor only transfers the files when they're all there.
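The marker-file convention is easy to sketch. Assuming (hypothetically) the sender writes name.pdf and name.xml first and an empty name.ready marker last, the receiving side only treats a set as complete once the marker exists:

```python
import os

def complete_sets(directory, marker_ext=".ready"):
    """Return base names whose marker file exists. The marker is written
    last, so the .pdf and .xml of that set are guaranteed complete."""
    names = set(os.listdir(directory))
    ready = []
    for n in sorted(names):
        if n.endswith(marker_ext):
            base = n[: -len(marker_ext)]
            # sanity check: both members of the set should be present
            if base + ".pdf" in names and base + ".xml" in names:
                ready.append(base)
    return ready
```

The same idea works whether the check lives in the trigger script or in the resource monitor's trigger pattern (match on the marker extension only).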
In answer to 2), you could set maxDestinationTransfers to 1 on the destination agent to limit it to receiving a single transfer at a time. If a transfer contains multiple files, they will be transferred in sequence, so the destination really only receives one file at a time. monitorMaxResourcesInPoll simply limits the monitoring agent to the number of files it parses in the source directory per monitor poll. You could set that to 1, but if you want to transfer the PDF and the XML file in the same transfer you'd need to set it to 2, so it's probably not the setting you want.
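As a sketch, that property would go in the destination agent's agent.properties file (assuming the standard agent properties mechanism; restart the agent after changing it):

```
maxDestinationTransfers=1
```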