Rails 4 wicked_pdf generates a blank PDF when generating from a model - ruby-on-rails-4

I'm trying to save a PDF on the server using a Rails model, but it generates a blank PDF. I did this earlier in a controller without a problem, but now it creates a blank one. Any idea what I did wrong?
def generate_bulk_pdf
  view = ActionView::Base.new(ActionController::Base.view_paths, {})
  view.extend(ApplicationHelper)
  view.extend(AbstractController::Rendering)
  view.extend(Rails.application.routes.url_helpers)
  students = Student.all.order('id ASC')
  students.each do |aStudent|
    pdf = WickedPdf.new.pdf_from_string(
      view.render_to_string(
        :template => "#{Rails.root.join('templates/challen.pdf.erb')}",
        :locals => { '@student' => aStudent }
      )
    )
    save_path = Rails.root.join('pdfs', 'filename.pdf')
    File.open(save_path, 'wb') do |file|
      file << pdf
    end
  end
end
Any idea what I did wrong? I can't find any solution.

A good test is to just put a simple line of text in your template and see if you get a PDF with that line. Strip everything back so you are generating a PDF with no incoming locals, just that one string, and let me know.
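As a concrete sketch of that sanity check (the output path is an assumption), you can even bypass the template and locals entirely and feed pdf_from_string a literal HTML string:

# Minimal smoke test: no template, no locals, just a literal HTML string.
pdf = WickedPdf.new.pdf_from_string('<h1>Hello from wicked_pdf</h1>')
File.open(Rails.root.join('pdfs', 'smoke_test.pdf'), 'wb') { |file| file << pdf }

If that file comes out blank too, the problem is with the wkhtmltopdf setup rather than with the template or the locals.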
Here is how I set up mine, and it works fine; it might make something click :)
def generate_pdf_voucher(voucher, dir_name)
  view = ActionView::Base.new(Rails.root.join('app/views'))
  view.class.include ApplicationHelper
  view.class.include Rails.application.routes.url_helpers
  pdf = view.render :pdf      => a_name,
                    :template => 'layouts/pdfs/voucher_pdf',
                    :layout   => 'layouts/pdfs/pdf.html.erb',
                    :header   => { :right => '[page] of [topage]' },
                    :locals   => { :@voucher => voucher }
  # then save to a file
  pdf = WickedPdf.new.pdf_from_string(pdf)
  save_path = Rails.root.join('public', 'pdfs', dir_name, "#{voucher[:user].id.to_s}.pdf")
  File.open(save_path, 'wb') do |file|
    file << pdf
  end
end
pdf.html.erb is the structure of the PDF
voucher_pdf is all the dynamic stuff
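For illustration, the structural layout could be a minimal sketch like this (the markup below is an assumption, not the answerer's actual file); the voucher_pdf template then only renders the dynamic voucher content into the body:

<!-- layouts/pdfs/pdf.html.erb: shared PDF structure -->
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* print styles shared by every generated PDF */
      body { font-family: sans-serif; }
    </style>
  </head>
  <body>
    <%= yield %>
  </body>
</html>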
If this wasn't helpful, then leave a comment and I will delete it.

Related

Django acting weird on Windows server IIS deployment

I have the following view, which allows me to save the information of a multi-step application.
def saveNewApplication(request, *args, **kwargs):
    educationList = [val for val in pickle.loads(bytes.fromhex(request.session['education'])).values()]
    basicInfoDict = pickle.loads(bytes.fromhex(request.session['basic_info']))
    documentsDict = pickle.loads(bytes.fromhex(request.session['documents']))
    applicant, created = ApplicantInfo.objects.update_or_create(
        applicantId=request.session['applicantId'],
        defaults={**basicInfoDict}
    )
    if created:
        # saving the diplomas
        for education in educationList:
            Education.objects.create(applicant=applicant, **education)
        with open(f"{documentsDict['cv_url']}/{request.session['file_name']}", 'rb') as f:
            Documents.objects.create(
                applicant=applicant,
                cv=File(f, name=os.path.basename(f.name)),
                langue_de_travail=documentsDict['langue_de_travail']
            )
        # remove the temporary folder
        shutil.rmtree(f"{documentsDict['cv_url']}")
    else:
        educationFilter = Education.objects.filter(applicant=applicant.id)
        for idx, edu in enumerate(educationFilter):
            Education.objects.filter(pk=edu.pk).update(**educationList[idx])
        # updating the documents
        document = get_object_or_404(Documents, applicant=applicant.id)
        if documentsDict['cv_url']:
            with open(f"{documentsDict['cv_url']}/{request.session['file_name']}", 'rb') as f:
                document.cv = File(f, name=os.path.basename(f.name))
                document.save()
        document.langue_de_travail = documentsDict['langue_de_travail']
        document.save()
    languagesDict = pickle.loads(bytes.fromhex(request.session['languages']))
    Languages.objects.update_or_create(applicant=applicant, defaults={**languagesDict})
    if 'experiences' in request.session and request.session['experiences']:
        experiencesList = [pickle.loads(bytes.fromhex(val)) for val in request.session['experiences'].values()]
        Experience.objects.filter(applicant=applicant.id).delete()
        for experience in experiencesList:
            Experience.objects.create(applicant=applicant, **experience)
    return JsonResponse({'success': True})
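For context, the hex-encoded pickles read above are presumably written in the earlier wizard steps roughly like this (a sketch; the form and variable names are assumptions):

# Hypothetical earlier step: pickle.dumps() returns bytes, .hex() turns it into
# a session-safe string, and bytes.fromhex()/pickle.loads() in the view above
# reverses it.
request.session['basic_info'] = pickle.dumps(basic_info_form.cleaned_data).hex()
request.session['documents'] = pickle.dumps({
    'cv_url': temp_dir,                      # temporary folder holding the CV
    'langue_de_travail': langue_de_travail,  # working language
}).hex()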
In development it works perfectly, but when deployed I get a 404 raised by this line, get_object_or_404(Documents, applicant=applicant.id), meaning created is false, and I can't figure out why that is.
The weirdest thing is that if I comment out the entire else block, it also returns a 500 error, but this time, if I click the link in the console, it shows the right response, {success: true}, without being redirected.
Below is my JavaScript function that calls the view.
applyBtn.addEventListener("click", () => {
  var finalUrl = "/api/applications/save-application/";
  fetch(finalUrl)
    .then(res => res.json())
    .then(data => {
      if (data.success) {
        window.location.href = '/management/dashboard/';
      } else {
        alert("something went wrong, Please try later");
      }
    })
});
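As a debugging sketch (not the original code), it may help to log the HTTP status and catch errors instead of going straight to res.json(), so the 404/500 body can be inspected in the browser console:

applyBtn.addEventListener("click", () => {
  fetch("/api/applications/save-application/", { credentials: "same-origin" })
    .then(res => {
      console.log("status:", res.status);  // 200, 404, 500, ...
      if (!res.ok) {
        // surface the error body instead of failing inside res.json()
        return res.text().then(body => { throw new Error(body); });
      }
      return res.json();
    })
    .then(data => {
      if (data.success) {
        window.location.href = "/management/dashboard/";
      }
    })
    .catch(err => console.error("save-application failed:", err));
});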
I am using PostgreSQL as the database; I deleted it twice, but nothing changed.
The URL file is here:
path("api/applications/save-application/", views.saveNewApplication, name="save-new-application"),
path("api/applications/delete-applicant/<slug:applicantId>/", views.deleteApplicant , name="delete-applicant"),
path('api/edit-personal-info/', editPersonalInfo, name="edit-personal-info"),
Any help or explanation would be highly appreciated. Thanks in advance.

Rails 4 and paperclip - Stop the :original style file upload to copy it from an S3 remote directory

I use Paperclip 4.0.2 in my app to upload pictures.
So my Document model has an attached_file called attachment.
The attachment has a few styles, say :medium, :thumb, :facebook.
In my model, I stop the styles processing and extract it into a background job.
class Document < ActiveRecord::Base
  # stop paperclip styles generation
  before_post_process do
    false
  end
end
But the :original style file is still uploaded!
I would like to know if it's possible to stop this behavior and instead copy the file into :original/filename.jpg from a remote directory.
My goal is to use a file that has already been uploaded to an S3 /temp/ directory with jQuery File Upload, and copy it to the directory where Paperclip needs it to generate the other styles.
Thank you in advance for your help!
New Answer:
Paperclip attachments get uploaded in the flush_writes method, which, for your purposes, is part of the Paperclip::Storage::S3 module. The line responsible for the uploading is:
s3_object(style).write(file, write_options)
So, by means of a monkey patch, you can change this to something like:
s3_object(style).write(file, write_options) unless style.to_s == "original" and @queued_for_write[:your_processed_style].present?
EDIT: this would be accomplished by creating the following file: config/initializers/decorators/paperclip.rb
Paperclip::Storage::S3.class_eval do
  def flush_writes #:nodoc:
    @queued_for_write.each do |style, file|
      retries = 0
      begin
        log("saving #{path(style)}")
        acl = @s3_permissions[style] || @s3_permissions[:default]
        acl = acl.call(self, style) if acl.respond_to?(:call)
        write_options = {
          :content_type => file.content_type,
          :acl => acl
        }
        # add storage class for this style if defined
        storage_class = s3_storage_class(style)
        write_options.merge!(:storage_class => storage_class) if storage_class
        if @s3_server_side_encryption
          write_options[:server_side_encryption] = @s3_server_side_encryption
        end
        style_specific_options = styles[style]
        if style_specific_options
          merge_s3_headers(style_specific_options[:s3_headers], @s3_headers, @s3_metadata) if style_specific_options[:s3_headers]
          @s3_metadata.merge!(style_specific_options[:s3_metadata]) if style_specific_options[:s3_metadata]
        end
        write_options[:metadata] = @s3_metadata unless @s3_metadata.empty?
        write_options.merge!(@s3_headers)
        s3_object(style).write(file, write_options) unless style.to_s == "original" and @queued_for_write[:your_processed_style].present?
      rescue AWS::S3::Errors::NoSuchBucket
        create_bucket
        retry
      rescue AWS::S3::Errors::SlowDown
        retries += 1
        if retries <= 5
          sleep((2 ** retries) * 0.5)
          retry
        else
          raise
        end
      ensure
        file.rewind
      end
    end
    after_flush_writes # allows attachment to clean up temp files
    @queued_for_write = {}
  end
end
Now the original does not get uploaded. You could then add some lines, like those of my original answer below, to your model if you wish to transfer the original to its appropriate final location when it was uploaded to S3 directly.
Original Answer:
Perhaps something like this, placed in your model and executed with the after_create callback:
paperclip_file_path = "relative/final/destination/file.jpg"
s3.buckets[BUCKET_NAME].objects[paperclip_file_path].copy_from("relative/temp/location/file.jpg")
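Wrapped into the model, that might look roughly like this (a sketch in the aws-sdk v1 style used above; the bucket constant, the temp key layout, and the callback name are assumptions):

class Document < ActiveRecord::Base
  after_create :copy_original_from_temp

  private

  def copy_original_from_temp
    s3 = AWS::S3.new
    # where Paperclip expects the :original file to live
    target_key = attachment.path(:original).sub(%r{\A/}, '')
    # hypothetical key the file was uploaded to by jQuery File Upload
    temp_key = "temp/#{attachment_file_name}"
    s3.buckets[BUCKET_NAME].objects[target_key].copy_from(temp_key)
  end
end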
thanks to https://github.com/uberllama

Rails 4 formatting with Slim error

I have this simple ERB code that works perfectly, joined to my i18n YAML file. The idea is, on the client's edit.html.erb page, to get the title of that page from my en.yml file and pass that title the @client.fullname variable. Like so:
<h1><%= t('client.edit.title'), :current_client => @client.fullname %></h1>
Now I'm in the process of translating my ERB files into Slim, so that line of code becomes:
h1 = t('client.edit.title'), :current_client => @client.fullname
But it wouldn't pass the variable to the en.yml file. Instead, it throws this error:
/app/views/clients/edit.html.slim:1: syntax error, unexpected ',', expecting ')' ...tty2 = (t('client.edit.title'), :current_client => @client.f... ... ^
Would anyone know what I'm doing wrong here?
The hash of options should be passed within the method call's parentheses, like so:
h1 = t('client.edit.title', :current_client => @client.fullname)
Not sure why this would have worked in ERB, but it doesn't look correct as written.
You can also remove the parentheses altogether
h1 = t 'client.edit.title', :current_client => @client.fullname
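For reference, the matching en.yml entry would presumably look like this (the key path comes from the question; the title text is an assumption), with %{current_client} as the interpolation placeholder that the :current_client option fills in:

en:
  client:
    edit:
      title: "Editing %{current_client}"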
Please try:
h1
= t('client.edit.title'), :current_client => @client.fullname

Perl Regex on a mechanize->content page

I am fiddling around in Perl and I managed to retrieve an HTML page from a source. However, I just want to retrieve one particular line. The line starts with a date formatted as follows: dd/mm/YYYY.
The HTML is displayed with print $resp->content();, $resp being the response from a $mechanize->submit_form();.
This is where $resp is made:
my $resp = $m->submit_form(
    # bunch of data
);
How do I achieve this? I am familiar with PHP, but I just started with Perl.
Thanks
Here's an example from some Mechanize code that I have.
my $mech = WWW::Mechanize->new();
$mech->get("url that takes you to the page with the form");
$mech->submit_form(form_name => 'someform',
                   fields    => {'user_name' => 'username',
                                 'password'  => 'password'},
                   button    => 'submit');
return if not $mech->success();
my $content = $mech->content();
if ($content =~ m|(\d{2,2}/\d{2,2}/\d{4,4}.*)|g) {
    print "My line: $1\n";
}
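One small refinement (an assumption about the requirement, based on the question saying the line starts with the date): anchor the pattern at the beginning of a line with ^ and the /m modifier, so dates appearing mid-line are not picked up:

# /m makes ^ and $ match at line boundaries within the page content
if ($content =~ m{^(\d{2}/\d{2}/\d{4}.*)$}m) {
    print "My line: $1\n";
}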

Parsing a big JSON, then multiple small JSONs

My English is not the best, but I will try to explain what I want. We have a big JSON with a lot of data; because the server can't show us all the data at the same time, there are multiple links to small JSONs inside this big JSON, like when there are 20 comments on a page and after that there is a "Show more comments" button. My problem is that I don't know how to do this: how to request the smaller JSON, then parse it. Another question is how to parse a JSON which links at the end to another JSON, something like when requesting a JSON from the Graph API, where at the end it has
[paging] => stdClass Object
(
[previous] => https://graph.facebook.com/$group_id$/feed?access_token=$valid_token$&__paging_token=$paging_token$&__previous=1
[next] => https://graph.facebook.com/$group_id$/feed?access_token=$valid_token$&limit=25&until=1375310522&__paging_token=$paging_token$
)
When the link from [next] is opened, it shows more posts from that group, and again at its end there is a next link.
As for the first question, here is a slightly longer example from the Facebook Graph API comments:
[comments] => stdClass Object
(
[data] => Array
(
[0] => stdClass Object
(
[id] => 53575265890127
[from] => stdClass Object
(
[name] => Pop Dan
[id] => 10000897827962
)
[message] => Random message
[can_remove] => 1
[created_time] => 2013-08-18T20:01:44+0000
[like_count] => 0
[user_likes] =>
)
... more coments...
[paging] => stdClass Object
(
[cursors] => stdClass Object
(
[after] => NTM1ODIxODE5ODA2NTQ0
[before] => NTM1NzUyNjU2NDgwMTI3
)
[next] => https://graph.facebook.com/$group_id$_$post_id$/comments?access_token=$accestoken$&limit=25&after=NTM1ODIxODE5ODA2NTQ0
)
)
There I want to continue parsing the JSON from [next] and then print it on the same HTML page, without any breaks or anything between the comments. Thanks ;)
Some of the details depend on the language you are using for querying the Graph API.
JSON returned from the Graph API can consist of JSON arrays and JSON objects. JSON objects can further have named value entries.
If you are doing it in Android, then first of all you have to get the inner JSON object from the response. You can do it this way:
GraphObject go = response.getGraphObject();
JSONObject jso = go.getInnerJSONObject();
Now, to get an object from the JSON you can use:
jso.getJSONObject("object_name");
and to get a JSON array you can use:
jso.getJSONArray("array_name")
Other languages will have similar interfaces.
For a general understanding of JSON, refer to this link.
Regarding your second question, you should understand that next is just another query to the graph node, but with different parameters. How the parameters can be set depends on the API, but in Android you can put parameters as follows:
Bundle params=r.getParameters();
params.putString("offset", ""+25);
r.setParameters(params);
r is an object of type Request.
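Putting the two parts together, a rough sketch (using org.json; the field names are taken from the dump in the question, and error handling plus the actual HTTP/Graph request for the next page are omitted, since those calls throw JSONException) might look like this:

JSONObject comments = jso.getJSONObject("comments");
JSONArray data = comments.getJSONArray("data");
for (int i = 0; i < data.length(); i++) {
    JSONObject comment = data.getJSONObject(i);
    String message = comment.getString("message");
    // append the comment to the page here
}
// "next" is just another Graph API query with different parameters:
// request it, parse the response the same way, and append the new
// comments until no "next" field is returned.
String nextUrl = comments.getJSONObject("paging").getString("next");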