Delete commits permanently from a Fossil repository

As the timeline below shows, I forgot to run fossil add prior to committing and produced two check-ins, ...4a and ...3d. I then checked out ...97, repeated the change, and committed with --allow-fork, producing ...85. How can I get rid of check-ins ...4a and ...3d?
user@PC:~/blinky$ fossil timeline
=== 2019-11-18 ===
17:00:08 [199f536985] *CURRENT* Move LED functions to BSP (user: user tags: trunk)
16:45:55 [b44070073d] Move LED functions to BSP (user: user tags: trunk)
16:44:03 [8e1bd6364a] Move LED functions to BSP (user: user tags: trunk)
15:35:02 [b9c7685997] *FORK* blah blah
15:16:11 [d1629aa3fa] Initial commit (user: user tags: trunk)
14:39:08 [7fe2388931] initial empty check-in (user: user tags: trunk)
+++ no more data (6) +++
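For reference, here is roughly the command sequence that produced this state (the file name is hypothetical):

```
fossil commit -m "Move LED functions to BSP"    # new file never added -> check-in ...4a
fossil add src/bsp_led.c                        # the add I had forgotten
fossil commit -m "Move LED functions to BSP"    # check-in ...3d
fossil checkout b9c7685997                      # back to ...97
# repeat the change, then:
fossil commit --allow-fork -m "Move LED functions to BSP"    # check-in ...85 (current)
```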
PS: I stumbled upon Rebase Considered Harmful, which explains the downside of a "cleaner history".

How can I specify global and local chunk options for a quarto pdf book?

I am trying to create a PDF book using Quarto. In particular, I want to know if there is a way to specify global chunk options, which would be used for all the chunks in all the qmd pages. The global option is
echo = TRUE
(I think this syntax is from R Markdown, not Quarto.) But for some code chunks on some of the pages, I want to hide the code (by setting echo = FALSE). How can I do this?
Setting Global Options
In Quarto documents, books, or presentations, to set options like echo, eval, warning, error, include, and output globally, we need to specify them under the execute YAML key.
So in a Quarto book, to set echo to true globally, we set echo: true under execute.
_quarto.yml
project:
  type: book

execute:
  echo: true

book:
  title: "Quarto book"
  author: "Jane Doe"
  date: "8/7/2022"
  chapters:
    - index.qmd
    - intro.qmd
    - summary.qmd
    - references.qmd

bibliography: references.bib

format:
  pdf:
    documentclass: scrreprt
Setting local options
Setting these options locally (for a specific chunk in a specific qmd page) works the same as in R Markdown.
Say in my index.qmd, I can specify echo=FALSE for a chunk like this:
index.qmd
# Preface {.unnumbered}
This is a Quarto book.
To learn more about Quarto books visit <https://quarto.org/docs/books>.
```{r echo=FALSE}
1 + 1
```
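Quarto also lets you write chunk options as #| comments at the top of the chunk, which is the Quarto-native equivalent of the header above:

```{r}
#| echo: false
1 + 1
```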

RSpec/Capybara "expect(page).to have_content" test fails while content is in page.html

I am writing a Rails app applying BDD using RSpec & Capybara. One of my tests continues to fail. The goal of the test is to check whether, for each Machine record displayed on the index page, clicking the edit link shows the details edit page. When I run my application, this functionality works, so I guess there's something wrong with my RSpec scenario.
Here's the failing test:
Failures:
1) existing machines have a link to an edit form
Failure/Error: expect(page).to have_content(@existing_machine.model)
expected to find text "RX22" in "Toggle navigation uXbridge Catalogue Settings Brands Machine Types Machine Groups Repair States Titles User Signed in as john@example.com Sign out Machine details Brand TORO Model Machine type ZITMAAIER Description Engine Purchase Price Unit Price VAT Minimal Stock Current Stock Warehouse Location"
# ./spec/features/machine_spec.rb:50:in `block (2 levels) in <top (required)>'
Here's the code of the test:
RSpec.feature 'existing machines' do
  before do
    @john = User.create!(email: 'john@example.com', password: 'password')
    login_as @john
    brand = Brand.create!(name: 'TORO')
    machinegroup = Machinegroup.create!(name: 'GAZON')
    machinetype = Machinetype.create!(name: 'ZITMAAIER', machinegroup_id: machinegroup.id)
    @existing_machine = Machine.create!(brand_id: brand.id, model: 'RX22', machinetype_id: machinetype.id, description: 'fantastic machine', engine: '100PK')
  end

  scenario 'have a link to an edit form' do
    visit '/machines'
    find("a[href='/machines/#{@existing_machine.id}/edit']").click
    expect(page).to have_content('Machine details')
    expect(page).to have_content(@existing_machine.model)
  end
end
When debugging the scenario, the @existing_machine object seems correctly populated through the .create!() method in the before do block.
screenshot of debug window in IDE
When inspecting the page.html in the debugger, I do see the "RX22" string appearing.
screenshot of page.html inspection
So why does RSpec/Capybara not see the same content when executing expect(page).to have_content(@existing_machine.model)?
RX22 is the value of an input element, not text content, so you need to check for it differently. Something like
expect(page).to have_field('Model', with: 'RX22')
should work.
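Applied to the scenario from the question, the assertion might look like this (assuming the edit form labels the model input "Model"):

```ruby
scenario 'have a link to an edit form' do
  visit '/machines'
  find("a[href='/machines/#{@existing_machine.id}/edit']").click
  expect(page).to have_content('Machine details')
  # The model is rendered as the value of a form field, not as page text,
  # so assert on the field instead of the text content.
  expect(page).to have_field('Model', with: @existing_machine.model)
end
```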

Capybara Poltergeist feature test failing on CI but passes locally

I'm having weird issues with some of my feature tests using Capybara with the Poltergeist driver.
The test should perform a simple checkout in my online shop.
They all pass fine on my local MacBook as well as on an Ubuntu Vagrant box. However, on CI services like Codeship, Wercker, or Semaphore they fail with the very same error.
My spec:
require 'rails_helper'

describe 'Checkout' do
  let!(:product) { FactoryGirl.create(:product) }

  it 'checks out via CreditCard', js: true do
    visit products_path
    expect(page.body).to have_link('Test Product 1')
    click_link('Test Product 1')
    # rest of spec omitted
  end
end
The error I get on CI is:
2) Checkout checks out via CreditCard
Failure/Error: click_link('Test Product 1')
Capybara::ElementNotFound:
Unable to find link "Test Product 1"
To me this is super weird: the first expectation, expect(page.body).to have_link('Test Product 1'), seems to pass, but then it fails on the next step, where it should actually click the link it just asserted to be present on the page.
I then reconfigured the Poltergeist driver as follows to gather more debug information.
Snippet of rails_helper.rb:
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app, {js_errors: false,
                                          #inspector: true,
                                          phantomjs_logger: Rails.logger,
                                          logger: nil,
                                          phantomjs_options: ['--debug=no', '--load-images=no', '--ignore-ssl-errors=yes', '--ssl-protocol=TLSv1'],
                                          debug: true
                                         })
end

Capybara.server_port = 3003
Capybara.app_host = 'http://application-test.lvh.me:3003' # lvh.me always resolves to 127.0.0.1
Capybara.javascript_driver = :poltergeist
Capybara.current_driver = :poltergeist
Capybara.default_wait_time = 5
Now I can see in the CI console that the test successfully visits my products_path and that the expected HTML page (including the link it should click) is being returned.
I removed the rest of the HTML response to make it more readable:
{"name"=>"visit", "args"=>["http://application-test.lvh.me:3003/products"]}
{"response"=>{"status"=>"success"}}
{"name"=>"body", "args"=>[]}
{"response"=>"--- snip --- <div class=\"info\">\n<a class=\"name color-pomegranate\" href=\"/en/products/6\">\nTest Product 1\n</a>\n850,00 \n</div> --- snap ---"}
{"name"=>"find", "args"=>[:xpath, ".//a[./#href][(((./#id = 'Test Product 1' or normalize-space(string(.)) = 'Test Product 1') or ./#title = 'Test Product 1') or .//img[./#alt = 'Test Product 1'])]"]}
{"response"=>{"page_id"=>4, "ids"=>[0]}}
{"name"=>"visible", "args"=>[4, 0]}
{"response"=>false}
{"name"=>"find", "args"=>[:xpath, ".//a[./#href][(((./#id = 'Test Product 1' or contains(normalize-space(string(.)), 'Test Product 1')) or contains(./#title, 'Test Product 1')) or .//img[contains(./#alt, 'Test Product 1')])]"]}
{"response"=>{"page_id"=>4, "ids"=>[1]}}
{"name"=>"visible", "args"=>[4, 1]}
{"response"=>false}
The last two find actions repeat until Capybara reaches its timeout, then the test fails.
I double-checked the XPath Capybara uses via some online XPath validators, and as expected it matches the HTML link.
I also used capybara-screenshot gem to dump the HTML body on failure and the link in question is also present.
So why is the test still failing?
Is there any race condition that I'm not aware of? Why is it passing locally but on none of the CI services?
Here are my gem versions:
capybara (2.4.4)
capybara-screenshot (1.0.3)
database_cleaner (1.3.0)
factory_girl (4.5.0)
factory_girl_rails (4.5.0)
poltergeist (1.5.1)
rails (4.1.8)
rspec (3.1.0)
rspec-rails (3.1.0)
and phantomjs 1.9.7
While I can't reproduce this, I remember having this problem before. I believe your line:
expect(page.body).to have_link('Test Product 1')
is passing because the link is literally in the body of the HTML page, even though it may be hidden due to CSS or JS behavior. However, the line:
click_link('Test Product 1')
definitely checks for visibility before clicking the link. You should check your spec_helper.rb configurations to make sure:
Capybara.ignore_hidden_elements = true
is present, so that the first line wouldn't pass. I think I also had to change the first line I mentioned to:
# Change page.body to page, to look at the rendered page, not the literal one
expect(page).to have_link('Test Product 1')
Once you do this, the first line blocks the thread and waits until the link becomes visible. Then the rest of the test will pass.
Hope this solves it.
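Putting the two suggestions together, the relevant lines might look like this (rails_helper.rb and the spec as in the question):

```ruby
# rails_helper.rb -- don't let matchers see hidden elements,
# so have_link only passes once the link is actually visible
Capybara.ignore_hidden_elements = true

# in the spec -- assert against the rendered page, not the literal HTML body;
# Capybara then retries until the link becomes visible before clicking it
expect(page).to have_link('Test Product 1')
click_link('Test Product 1')
```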

How would I change this to prevent numerous queries against the database to check the user role?

Last Updated: 29 Aug 2013 18:54 EST
I have the following module defined and then included into my model. I am using the rolify gem to give my users roles.
module Permissions::Offer
  extend ActiveSupport::Concern

  included do
    # `user` is a context of security
    protect do |user, offer|
      # Admins can retrieve anything
      if user.has_role? :administrator
        scope { all }
        # ... and view, create, update, or destroy anything
        can :view
        can :create
        can :update
        can :destroy
      elsif user.present?
        # Allow to read any field
        can :view
        can :create
        # Checks offered_by_id keeping possible nil in mind
        # Allow sellers to modify/delete their own offers
        if offer.try(:offered_by_id) == user.id
          can :update
          can :destroy
        end
      else
        # Guests can't read the text
        cannot :view
      end
    end
  end
end
What I am experiencing is that when I do the following...
respond_with Offer.restrict!(current_user)
It queries the roles table for every offer that is returned. Is there any way to have it not make this request repeatedly when requesting a list of offers? I'm sure I could cache the response to avoid the database hit, but I'd rather it not hit the cache either.
If I open a rails console and do the following I get the same result:
current_user = User.first
Offer.restrict!(current_user).to_a
I have installed the bullet gem to see if it considers this an N+1 query, and it does not detect it. I believe that because the included block gets called every time a new Offer instance is created, it fires off this call to verify permissions. That, coupled with the fact that rolify does not cache its user role checks for any length of time, makes this less than ideal. I suppose rolify does this to allow roles to be changed on the fly without having to deal with clearing the cache. As of now the only way I can see to solve this is to implement caching of my own.
I opened an issue with rolify to see if they are interested in creating a more permanent solution. For anyone else who encounters this, here's what I did in the meantime.
def has_role?(role)
  roles = Rails.cache.fetch(roles_for: { object_id: self.object_id }, expires_in: 10.seconds, race_condition_ttl: 2.seconds) { self.roles.map(&:name) }
  roles.include?(role)
end
This doesn't do everything the real method does, but it suits my purposes.
Here is a link to the source for anyone that wishes to implement something like this on all the methods.
https://github.com/EppO/rolify/blob/master/lib/rolify/role.rb
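For context, a sketch of one place such an override could live; the model name is an assumption, and keying the cache on the record id (instead of object_id, which changes with every instantiation) is a tweak of mine, not part of the original workaround:

```ruby
# app/models/user.rb (assumed -- wherever rolify is mixed in)
class User < ActiveRecord::Base
  rolify

  # Cached variant of rolify's has_role?; only covers plain global roles,
  # not resource-scoped roles.
  def has_role?(role)
    names = Rails.cache.fetch([:roles_for, id],
                              expires_in: 10.seconds,
                              race_condition_ttl: 2.seconds) { roles.map(&:name) }
    names.include?(role.to_s)
  end
end
```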

Uncommitted record still present after rollback()

I have the following in an ObjectController and have verified that both actions are called correctly:
setup: ->
  transaction = @get('store').transaction()
  post = transaction.createRecord(App.Post, {postedAt: new Date()})
  @set('content', post)

cancel: ->
  @get('content.transaction').rollback()
However, despite the transaction being rolled back, the uncommitted record is still present in the data store.
Should I be handling created records differently in transactions?
Edit: I'm also seeing errors such as this after rolling back the transaction:
Error: Attempted to handle event `didSetProperty` on <App.Post:ember926:null> while in state rootState.deleted.saved. Called with {name: title}