Here you can see two websites that store some music files:
http://songspkherodownload.com/Ankit%20Tiwari-SongspkHero.Com/
The other one is:
http://madmusic.esy.es/files/msc/
Both store songs, but on the first site, clicking a file downloads it, whereas on the second site the file plays in the browser instead of downloading.
What's the issue with the second one?
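For what it's worth, the deciding factor is usually the Content-Disposition response header: a value of "attachment" forces the browser to download, while its absence lets the browser play audio inline. A quick way to compare what the two servers send, sketched in Python with the requests library (the URL below is a placeholder; substitute a direct file link from each site):
import requests

# Placeholder URL: substitute a direct file link from each of the two sites
url = "http://example.com/path/to/song.mp3"
response = requests.head(url, allow_redirects=True)

# "Content-Disposition: attachment" forces a download dialog;
# without it, browsers play audio/* content inline.
print(response.headers.get("Content-Type"))
print(response.headers.get("Content-Disposition"))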
I'm trying to download this Adobe Connect recorded class with youtube-dl, but it shows me an error. Can you help me?
http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
error:
youtube-dl.exe http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Requesting header
WARNING: Falling back on generic information extractor.
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Downloading webpage
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Extracting information
ERROR: Unsupported URL: http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
I'm sure that it is an Adobe Connect recording!
thanks!
I wanted to use youtube-dl for an Adobe Connect recorded class too, but I was unsuccessful. There is a simple solution for downloading your recorded class; your Adobe Connect URL should look like this:
http://webinar2.um.ac.ir/abcdefghijk/
Add this to the end:
output/filename.zip?download=zip
So your URL should look like this:
http://webinar2.um.ac.ir/abcdefghijk/output/filename.zip?download=zip
Then it will give you a zip file containing the resources of the recorded class. One disadvantage of this method is that you may need to merge the audio into the video yourself. Sometimes there is more than one file for a whole class; for example, in my case there were 3 files, each containing about 20 minutes of a 60-minute class.
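For convenience, here is a minimal sketch of that download in Python (assuming the recording is publicly accessible; the room ID and archive name below are placeholders following the pattern above):
import requests

# Placeholder URL built from the pattern described above
url = "http://webinar2.um.ac.ir/abcdefghijk/output/filename.zip?download=zip"

response = requests.get(url, stream=True)
response.raise_for_status()
with open("recording.zip", "wb") as f:
    # Stream in chunks so a long class doesn't have to fit in memory
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)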
I have two separate servers: a CD server and a CM server. I upload images on the CM server and publish them. In the web database I can see the images under the Media Library item, but they aren't displayed on the CD server (i.e. on the website); it indicates that the images are not found. Please help me figure out how to solve this problem, or whether I need some extra configuration.
Many thanks.
Sitecore media items can carry the actual media file either as:
Blob in the database - everything works automatically OOB
Files on the file system - one needs to configure either WebDeploy or DFS
Database resources are costly; you might not want to waste them on something that can be achieved with free tools.
Since WebDeploy by default locates modified files by comparing file hashes between source and target, it will become slower after a while.
You might have uploaded the image to the media library as a file, in which case it is stored as a file on the file system. To verify this, check whether your image item in the media library has a path value set in its 'File Path' field. Such files have to be moved to the file system of the CD server as well.
If you upload your images in bulk, you can store them as blobs in the database by default, rather than as files on the file system, using the following setting:
<setting name="Media.UploadAsFiles" value="false" />
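If you prefer not to edit Web.config directly, the same setting can be applied through a standard Sitecore include patch file. This is just a sketch; the file name (e.g. App_Config/Include/zMedia.UploadAsFiles.config) is arbitrary:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="Media.UploadAsFiles">
        <patch:attribute name="value">false</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>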
So in the Sitecore site, in Data/Submit Queue, there is a file without an extension that represents the content of the Submit Queue.
If you try viewing it as a text file, it shows some content, but there are some strange characters in the mix.
So, has someone made an application to view this file? Is it supposed to be in a specific format that should be opened with an application able to read that format?
Extra info: Sitecore 8.0; and no, there is nothing about it in the Control Panel or in sitecore/admin.
Mark is right: the submit queue isn't meant for users to view. A couple of months ago, I wrote a post on this exact subject.
https://citizensitecore.com/2016/07/01/xdb-session-info-and-mongodb-availability/
From Akinori Taira, a member of the xDB product team:
In the event that the collections database is unavailable, there is a special ‘Submit Queue’ mechanism that flushes captured data to the local hard drive (the ‘Data\Submit Queue’ folder by default). When the collections database comes back online, a background worker process submits the data from the ‘Submit Queue’ on disk.
No, you're not meant to open the Submit Queue or do anything with it.
It is used by xDB (in your case) to submit data when xDB cannot be reached. It will be in a format related to MongoDB in some way, but I've never seen any formal documentation for it.
References:
http://sitecoreart.martinrayenglish.com/2015/04/sitecore-xdb-cloud-edition-what-you.html
Sitecore 8.1: Purpose of Submit Queue and MediaIndexing folders under $(dataFolder)
This file contains the analytics data that was not flushed to the Mongo database.
If the xDB collection server is unavailable, Sitecore must handle the situation correctly, so there is a special 'Submit Queue' mechanism that flushes captured data to the local server hard drive (the 'Data\Submit Queue' folder by default) while xDB is unavailable.
When xDB is up again, a background worker submits the data saved on disk, so no data is lost.
As a quick suggestion, check whether your MongoDB server is reachable from your Sitecore instance. Once it becomes available, all data from the file should be flushed to xDB.
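A minimal way to check reachability from the Sitecore server, sketched here with Python's pymongo (the connection string is a placeholder; take the real one from your ConnectionStrings.config):
from pymongo import MongoClient

# Placeholder connection string; use the analytics one from ConnectionStrings.config
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=3000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError if unreachable
print("MongoDB is reachable")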
The submit queue file stores serialized values as follows: the first value is the number of entities; the second value is the position of the next entity to be submitted to xDB; the remaining values contain the serialized analytics data.
The submit queue is processed by the class Sitecore.Analytics.Data.DataAccess.SubmitQueue.FileSubmitQueue.
If you want to debug and see how it is processed, decompile the class, create your own class, and swap it in via Sitecore.Analytics.Tracking.config:
<submitQueue>
<queue type="Sitecore.Analytics.Data.DataAccess.SubmitQueue.FileSubmitQueue, Sitecore.Analytics" singleInstance="true" />
</submitQueue>
I am worn out by one problem, so please help me clear things up.
Please read the following three points and help me out.
(1)
I have simply followed this guide: https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-livestreamrecordautorecord-example#documentation
I have attached my Application.xml. Now, when I publish a live stream named "test1" via FMLE, it gets recorded on the server; but when I run a different instance of FMLE on a different PC and publish a live stream named "test2", it does not get recorded, and I think it goes into the previously recorded file "test1" (meaning no separate file is recorded, although there should be two recorded files, test1 and test2).
Why is this happening?
Is com.wowza.wms.plugin.livestreamrecord.module.ModuleAutoRecordAdvancedExample meant for single-stream recording? That is, if I publish streams A, B, C, and D, will it record them all into one single file? (Probably the output file will be A.mp4, as A was the first published stream?)
(2) What is the module at https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-imediastreamactionnotify3#comments for?
I have implemented this code in Eclipse, successfully put the jar in the lib folder, and configured everything. Again, though, I am not able to record different streams with their corresponding names. That is, if I publish stream1 and stream2, the desired output should be two different files (in the content folder), but again I see one single file being recorded.
(3) Can I use ModuleLiveStreamRecord.java? This was in an older version of Wowza, but I have properly imported the required jar and tested it.
My requirement is very simple:
As soon as users start publishing, Wowza should start recording the live streams. If 10 users are publishing live, 10 files should be generated.
Don't make things more difficult than necessary (assuming you have Wowza 4.x; if you still have 3.x, I highly recommend upgrading for free):
Open the Engine Manager (http://your.server.com:8088)
Go to "Applications" from the top menu
Select your application from the left menu (e.g. "live")
In the setup window for this application, click the blue Edit button
Enable "Record all incoming streams"
Click "Save"
Click the orange "Restart now" button at the top
Done
Every stream that is published via this application will now automatically be recorded. The default folder for recordings is the /content folder in your Wowza installation. You can change this on the same page under "Streaming File Directory" (make sure it's a directory on your local system, unless you understand very well how Wowza works).
The filename is always the stream name + ".mp4", but when you start a new recording while the file already exists, the old file will be renamed first.
Want to control recording manually? Start publishing first, then select "Incoming streams" from the left menu and use the big red dot button behind a stream name to start recording.
If your server produces any different behavior with regards to the file (re)naming or recording, then you may need to review your Wowza setup.
I appreciate your response, KBoek.
I sorted out the issue, but real debugging was needed since I was writing a custom module. I had to write a custom module for live auto-recording because I wanted HTTP authentication and custom names for the live recordings.
thanks again
I'm having problems when uploading lots of files in Django. The context is the following: I have a spreadsheet in which one or more columns contain image filenames; those images are uploaded through a form with an input of type=file and the multiple option.
With few lines - say 70 - everything goes fine. But with more lines, and consequently more images, an IOError happens at random positions.
I've checked several questions about file/image upload in Django but couldn't find any that is related to my problem.
The model I'm using is the Product model of LFS (www.getlfs.com). We are developing a system based on LFS, and to facilitate the creation of dozens of products in batch, we wrote some views and templates to receive the main product properties through a spreadsheet. Each line is a product and the columns are the desired properties.
LFS uses a custom class, ImageWithThumbsField(ImageField), to store the product's image, and when the product instance (built from the spreadsheet) is saved, all thumbnails are generated. This is a time- and CPU-consuming task, and my initial guess is that for some reason the temporary file is deleted before all processing has occurred.
Is there a way to keep these uploaded files around longer? Can you suggest another approach for processing hundreds of uploaded files? Any hints on what might be happening?
I hope you can understand my question; I can post code if needed.
Links to relevant portions of LFS code:
where thumbnails are generated:
https://github.com/diefenbach/django-lfs/blob/master/lfs/core/fields/thumbs.py
the product model:
https://github.com/diefenbach/django-lfs/blob/master/lfs/catalog/models.py
Thanks in advance!
It sounds like you are running out of memory. When Django processes uploads, until the form is validated, all of the files are either:
kept in memory inside the Python/WSGI process/worker (the usual mode of operation for runserver)
In this case, you are uploading enough photos to fill up the process memory and run out of space. As you can imagine, where the IOError happens will be non-deterministic (GC-dependent).
temporarily stored in /tmp/ (the usual setup for Apache)
In this case, the web server's ramfs fills up with images that have not yet been written to disk, and the IOError should occur around the same place each time.
In either case, you should not be bulk uploading images in this way anyway. Apache/Django is not designed for it. Try uploading a single product/image per request/response, and all your problems will go away.
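That said, if you need to keep the bulk upload for now, one hedged mitigation (based on Django's documented upload-handler settings, nothing LFS-specific) is to force every upload straight to a disk-backed temp directory instead of memory:
# settings.py
# Skip the in-memory handler entirely so every upload goes to a temp file on disk.
FILE_UPLOAD_HANDLERS = [
    "django.core.files.uploadhandler.TemporaryFileUploadHandler",
]

# Optional: point temp files at a disk-backed path instead of a RAM-backed /tmp.
# The path below is just an example; make sure it exists and is writable.
FILE_UPLOAD_TEMP_DIR = "/var/tmp/django_uploads"
Note that the temporary file still disappears when the request ends, so any thumbnail generation has to finish within the request, or the images must first be copied somewhere persistent.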