I would like to create a dynamic playlist with Liquidsoap and Icecast. I've just copied the tutorial from the Liquidsoap website, but unfortunately it does not work.
This is my code:
def get_next() =
  result = list.hd(get_process_lines("/var/www/radiod/yii program-generator/next-track 1"))
  # Create and return a request using this result
  request.create(result)
end
# Create the source
s = request.dynamic(id="s", get_next)
# Output
source = output.icecast(%mp3, host="localhost", port=8000, mount="opera.mp3", password="asd123", s)
I get this error message when I run the check command:
Invalid value at line 9, char 20-37: That source is fallible.
So the problem seems to be around this line:
s = request.dynamic(id="s", get_next)
Can you help me figure out what the failure could be?
Thanks in advance!
http://savonet.sourceforge.net/doc-svn/quick_start.html covers "That source is fallible." in detail. You might want to go through the whole page.
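In short, request.dynamic is fallible because the external script may fail to return a usable request, so Liquidsoap refuses to feed it straight into output.icecast. A minimal sketch of one common fix, reusing the script above (the emergency file path is just a placeholder, not something from the question):
s = request.dynamic(id="s", get_next)
# Option 1: mksafe plays silence whenever the request cannot be resolved
safe_s = mksafe(s)
# Option 2: fall back to a known-good local file instead
# safe_s = fallback(track_sensitive=false, [s, single("/path/to/emergency.mp3")])
output.icecast(%mp3, host="localhost", port=8000, mount="opera.mp3", password="asd123", safe_s)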
I cannot find any example of how to attach files (PDFs) that are inside my site's root folder using the send_mail function in Python (Google App Engine).
url_test = "https://mywebsite.com/pdf/test.pdf"
test_file = urlfetch.fetch(url_test)
if test_file.status_code == 200:
    test_document = test_file.content
    mail.send_mail(sender=EMAIL_SENDER,
                   to=['test@test.com'],
                   subject=subject,
                   body=theBody,
                   attachments=[("testing", test_document)])
Decided to try it with EmailMessage:
message = mail.EmailMessage(sender=EMAIL_SENDER,
                            subject=subject,
                            body=theBody,
                            to=['myemail@gmail.com'],
                            attachments=[(attachname, blob.archivoBlob)])
message.send()
The blob attachment above sends successfully; however, attaching a file via a relative path always fails with "invalid attachment".
new_file = open(os.path.dirname(__file__) +
                '/../pages/pdf/test.PDF').read()
message = mail.EmailMessage(sender=EMAIL_SENDER,
                            subject=subject,
                            body=theBody,
                            to=['myemail@gmail.com'],
                            attachments=[('testing', new_file)])
message.send()
In debugging I have also tried to see if the file is being read by doing this:
logging.info(new_file)
It seems to be reading the file, since it outputs some unicode characters.
Please, can someone explain why I am not able to attach a PDF while I can attach a blob?
When passing the attachments, the file type has to be indicated in the attachment title, for example attachments=[('testing.pdf', new_file)]. View this link
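For instance, a minimal sketch of the corrected call, reusing EMAIL_SENDER, subject and theBody from the question; the only substantive change is that the attachment name carries a .pdf extension so the mail API can infer the content type:
import os
from google.appengine.api import mail

# read the PDF as raw bytes; 'rb' avoids any text-mode surprises
new_file = open(os.path.dirname(__file__) + '/../pages/pdf/test.PDF', 'rb').read()
message = mail.EmailMessage(sender=EMAIL_SENDER,
                            to=['myemail@gmail.com'],
                            subject=subject,
                            body=theBody,
                            attachments=[('testing.pdf', new_file)])
message.send()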
I'm trying to get CarrierWave multiple file uploads working. I'm following the documentation on the homepage. When I try to upload one file or several, I get a "no implicit conversion of nil to string" error.
That error is coming from this method in the CarrierWave gem, found in uploaders/cache.rb:
def workfile_path(for_file=original_filename)
  File.join(CarrierWave.tmp_path, @cache_id, version_name.to_s, for_file)
end
The issue is that original_filename is nil. I've tried to trace the problem but can't find where it really begins. One thing that is odd is that I am following some source code from this repo:
https://github.com/bobintornado/sample-gallery-app-with-carrierwave
The sample app works and you can do multiple uploads. The difference, though, is that when the cache! method is called, new_file is an Array, whereas in the working sample app it is an ActionDispatch::Http::UploadedFile.
Here's the cache! method:
def cache!(new_file = sanitized_file)
  new_file = CarrierWave::SanitizedFile.new(new_file)
  return if new_file.empty?

  raise CarrierWave::FormNotMultipart if new_file.is_path? && ensure_multipart_form

  self.cache_id = CarrierWave.generate_cache_id unless cache_id

  @filename = new_file.filename
  self.original_filename = new_file.filename

  begin
    # first, create a workfile on which we perform processings
    if move_to_cache
      @file = new_file.move_to(File.expand_path(workfile_path, root), permissions, directory_permissions)
    else
      @file = new_file.copy_to(File.expand_path(workfile_path, root), permissions, directory_permissions)
    end

    with_callbacks(:cache, @file) do
      @file = cache_storage.cache!(@file)
    end
  ensure
    FileUtils.rm_rf(workfile_path(''))
  end
end
Here are my initial params
"coach"=>{"name"=>"ben", "title"=>"ceo", "description"=>"head dude",
"photos"=>[
#<ActionDispatch::Http::UploadedFile:0x007fc9a5235c78 #tempfile=#<Tempfile:/var/folders/sb/t6rry5j928l3sy96nkhy9f840000gn/T/RackMultipart20160113-67635-avg8ef.jpg>, #original_filename="benn-1.jpg", #content_type="image/jpeg", #headers="Content-Disposition: form-data; name=\"coach[photos][]\"; filename=\"benn-1.jpg\"\r\nContent-Type: image/jpeg\r\n">,
#<ActionDispatch::Http::UploadedFile:0x007fc9a5235c50 #tempfile=#<Tempfile:/var/folders/sb/t6rry5j928l3sy96nkhy9f840000gn/T/RackMultipart20160113-67635-r8bdxp.jpg>, #original_filename="benn-2.jpg", #content_type="image/jpeg", #headers="Content-Disposition: form-data; name=\"coach[photos][]\"; filename=\"benn-2.jpg\"\r\nContent-Type: image/jpeg\r\n">
]}
Sorry this isn't terribly helpful, but I burned my current working branch, started again from the beginning, and now everything is working. I'm not sure what I did differently. If you're running into the same issue, I recommend starting over and following this tutorial:
Multiple-Images-Uploading-With-CarrierWave-and-PostgreSQL-Array
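For comparison, the multiple-upload setup that tutorial (and the CarrierWave README) describes looks roughly like the sketch below; the model, field and uploader names here are illustrative, taken from the params in the question. One common way to end up with cache! receiving a whole Array is mounting a single-file uploader (mount_uploader instead of mount_uploaders) or not permitting photos as an array, so the array of UploadedFile objects gets assigned in one piece.
# app/models/coach.rb -- assumes a PostgreSQL array (or JSON) column named photos
class Coach < ActiveRecord::Base
  mount_uploaders :photos, PhotoUploader   # note the plural: one uploader per element
end

# in the controller: permit photos as an array of files
def coach_params
  params.require(:coach).permit(:name, :title, :description, photos: [])
end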
I have a process that scans a tape library and looks for media that has expired, so they can be removed and reused before sending the tapes to an offsite vault. (We have some 7-day policies that never make it offsite.) This process takes around 20 minutes to run, so I didn't want it to run on demand when loading/refreshing the page. Instead, I set up a django-cron job (I know I could have done this in Linux cron, but I wanted the project to be as self-contained as possible) to run the scan and write a file in /tmp. I've verified that this works: the file exists in /tmp from this morning's execution. The problem I'm having is that now I want to display a list of those expired (scratch) media on my web page, but the script says it can't find the file. When the file was created, I used the absolute filename "/tmp/scratch.2015-11-13.out" (for example), but here's the error I get in the browser:
IOError at /
[Errno 2] No such file or directory: '/tmp/corpscratch.2015-11-13.out'
My assumption is that this is a "web root" issue, but I just can't figure it out. I tried copying the file to the /static/ and /media/ directories configured in Django, and even to the Django root directory and the project root directory, but nothing seems to work. When it says it can't find /tmp/file, where is it really looking?
def sample():
    """ Just testing """
    today = datetime.date.today()  # format 2015-11-31
    inputfile = "/tmp/corpscratch.%s.out" % str(today)
    with open(inputfile) as fh:  # This is the line reporting the error
        lines = [line.strip('\n') for line in fh]
    print(lines)
The print statement was used for testing in the shell (which works, I might add), but the browser gives an error.
And the file does exist:
$ ls /tmp/corpscratch.2015-11-13.out
/tmp/corpscratch.2015-11-13.out
Thanks.
Edit: I was mistaken, it doesn't work in the Python shell either. I was thinking of a previous issue.
Use this instead:
today = datetime.datetime.today().date()
inputfile = "/tmp/corpscratch.%s.out" % str(today)
Or:
today = datetime.datetime.today().strftime('%Y-%m-%d')
inputfile = "/tmp/corpscratch.%s.out" % today # No need to use str()
See the difference:
>>> str(datetime.datetime.today().date())
'2015-11-13'
>>> str(datetime.datetime.today())
'2015-11-13 15:56:19.578569'
I ended up finding this elsewhere:
today = datetime.date.today() #format 2015-11-31
inputfilename = "tmp/corpscratch.%s.out" % str(today)
inputfile = os.path.join(settings.PROJECT_ROOT, inputfilename)
With settings.py containing the following:
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
Completely resolved my issues.
I am trying to get ryu to run, especially the topology discovery.
Now I am running the demo application for that under ryu/topology/dumper.py, which is supposed to dump all topology events. I am in the ryu/topology directory and run it using ryu-manager dumper.py. The version of ryu-manager is 2.23.2.
Shortly after starting it gives me this error:
/usr/local/lib/python2.7/dist-packages/ryu/topology/switches.py:478: UserWarning:
Datapath#ports is kept for compatibility with the previous openflow versions (< 1.3).
This not be updated by EventOFPPortStatus message. If you want to be updated,
you can use 'ryu.controller.dpset' or 'ryu.topology.switches'.
for port in dp.ports.values():
What's really weird to me is that it recommends using ryu.topology.switches, but that warning is triggered by line 478 of that very file!
The function in question is this:
class Switches(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION, ofproto_v1_2.OFP_VERSION,
                    ofproto_v1_3.OFP_VERSION, ofproto_v1_4.OFP_VERSION]
    _EVENTS = [event.EventSwitchEnter, event.EventSwitchLeave,
               event.EventPortAdd, event.EventPortDelete,
               event.EventPortModify,
               event.EventLinkAdd, event.EventLinkDelete]
    DEFAULT_TTL = 120  # unused. ignored.
    LLDP_PACKET_LEN = len(LLDPPacket.lldp_packet(0, 0, DONTCARE_STR, 0))
    LLDP_SEND_GUARD = .05
    LLDP_SEND_PERIOD_PER_PORT = .9
    TIMEOUT_CHECK_PERIOD = 5.
    LINK_TIMEOUT = TIMEOUT_CHECK_PERIOD * 2
    LINK_LLDP_DROP = 5
    # ...

    def _register(self, dp):
        assert dp.id is not None

        self.dps[dp.id] = dp
        if dp.id not in self.port_state:
            self.port_state[dp.id] = PortState()
            for port in dp.ports.values():  # THIS LINE
                self.port_state[dp.id].add(port.port_no, port)
Has anyone else encountered this problem before? How can I fix it?
I ran into the same issue (depending on your application, maybe it's not a problem, just a warning that you can ignore). Here is what I figured out after a find . -type f | xargs grep "ports is kept"
This warning is triggered in ryu.topology.switches, by a call to _get_ports() in class Datapath of file ryu/controller/controller.py.
class Datapath(ofproto_protocol.ProtocolDesc):
    # ......
    def _get_ports(self):
        if (self.ofproto_parser is not None and
                self.ofproto_parser.ofproto.OFP_VERSION >= 0x04):
            message = (
                'Datapath#ports is kept for compatibility with the previous '
                'openflow versions (< 1.3). '
                'This not be updated by EventOFPPortStatus message. '
                'If you want to be updated, you can use '
                '\'ryu.controller.dpset\' or \'ryu.topology.switches\'.'
            )
            warnings.warn(message, stacklevel=2)
        return self._ports

    def _set_ports(self, ports):
        self._ports = ports

    # To show warning when Datapath#ports is read
    ports = property(_get_ports, _set_ports)
My understanding is that if the warning comes from ryu.topology.switches or ryu.controller.dpset, you can ignore it, because those two classes handle the port-status event for you. But if you use Datapath directly, port status is not updated automatically. Anyone correct me if I'm wrong. For example, Switches registers its own handler:
class Switches(app_manager.RyuApp):
    # ......
    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def port_status_handler(self, ev):
I have encountered that problem before, but I just ignored it and so far everything has been working as expected.
If you are trying to learn the topology, I would recommend using ryu.topology.api, i.e.
from ryu.topology.api import get_switch, get_link
There is this tutorial. However, some parts of it are missing.
Here is what I have so far: Controller.py
In Controller.py, the two functions get_switch(self, None) and get_link(self, None) give you the lists of switches and links.
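A minimal sketch of how those calls are typically wired into a RyuApp (the class and handler names here are illustrative; also note that topology discovery generally requires starting ryu-manager with --observe-links):
from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.topology import event
from ryu.topology.api import get_switch, get_link

class TopologyWatcher(app_manager.RyuApp):
    @set_ev_cls(event.EventSwitchEnter)
    def get_topology_data(self, ev):
        # passing None asks for every switch/link the topology module knows about
        switch_list = get_switch(self, None)
        switches = [switch.dp.id for switch in switch_list]
        link_list = get_link(self, None)
        links = [(link.src.dpid, link.dst.dpid, link.src.port_no) for link in link_list]
        self.logger.info('switches: %s links: %s', switches, links)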
I have some code written in Django/Python. The principle is that the HTTP response is a generator function: it spits the output of a subprocess onto the browser window line by line. This works really well when I am using the Django test server. When I use the real server it fails; basically it just beachballs when you press submit on the previous page.
@condition(etag_func=None)
def pushviablah(request):
    if 'hostname' in request.POST and request.POST['hostname']:
        hostname = request.POST['hostname']
        command = "blah.pl --host " + hostname + " --noturn"
        return HttpResponse(stream_response_generator(hostname, command),
                            mimetype='text/html')
def stream_response_generator(hostname, command):
    # unbuffered pipes for stdin/stdout/stderr
    proc = subprocess.Popen(command.split(), bufsize=0, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    yield "<pre>"
    # yield the subprocess output one line at a time until stdout is closed
    for line in iter(proc.stdout.readline, ''):
        yield line
Does anyone have any suggestions on how to get this working on the real server? Or even how to debug why it is not working?
I discovered that the generator function is actually running, but it has to complete before the HttpResponse puts a page onscreen. I don't want the user to have to wait for it to complete before seeing output; I would like the user to see output as the subprocess progresses.
I'm wondering if this issue could be related to something in apache2 rather than django.
@evolution, did you use gunicorn to deploy your app? If yes, then you have created a service. I am having a similar kind of issue, but with LibreOffice. From what I have researched, PATH is overriding the command path used by your subprocess. I do not have a solution yet. If you bind your app with gunicorn in the terminal, then your code will also work.
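For completeness, on Django 1.5+ (the code above uses the older mimetype argument, so this is only a sketch) the usual way to get incremental output is to return a StreamingHttpResponse instead of an HttpResponse; even then, output can still be buffered by the front end (Apache modules such as mod_deflate, or a proxy in between) regardless of what Django does:
from django.http import StreamingHttpResponse

def pushviablah(request):
    hostname = request.POST.get('hostname', '')
    command = "blah.pl --host " + hostname + " --noturn"
    # same generator as in the question; StreamingHttpResponse sends each
    # yielded chunk to the client as it is produced instead of collecting
    # the whole body first
    return StreamingHttpResponse(stream_response_generator(hostname, command),
                                 content_type='text/html')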