I have the following CSS code:
body {
    background: white;
}

@media screen and (min-device-width: 980px) /* Desktop */ {
    body {
        background: red;
    }
}
How do I emulate device size with Puppeteer? page.setViewport() and --window-size don't work, because they emulate the viewport size, not the device size.
You can emulate a device using the page.emulate function.
Example from the docs:
const puppeteer = require('puppeteer');
const iPhone = puppeteer.devices['iPhone 6'];

puppeteer.launch().then(async browser => {
    const page = await browser.newPage();
    await page.emulate(iPhone);
    await page.goto('https://www.google.com');
    // other actions...
    await browser.close();
});
If you don't find your device in the devices list, you can build your own device descriptor and pass it to page.emulate.
You have to use deviceScaleFactor to scale the device width in relation to the viewport.
Here is an example for the viewport of the iPhone 4:
{
    'width': 320,
    'height': 480,
    'deviceScaleFactor': 2,
    // ...
}
In this example, the device width is emulated as 320 although the true width of the viewport (the actual number of device pixels) is 640 (320 * 2). In other words: give page.emulate the device width and use deviceScaleFactor to scale the viewport accordingly.
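For example, a hand-rolled device descriptor might look like this (the name and metric values below are illustrative, not a real preset):

const puppeteer = require('puppeteer');

// Illustrative custom device; the name and numbers are made up for this example.
const myDevice = {
    name: 'My Custom Device',
    userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1',
    viewport: {
        width: 980,            // CSS pixels, what min-device-width compares against
        height: 1306,
        deviceScaleFactor: 2,  // physical pixels = width * deviceScaleFactor
        isMobile: true,
        hasTouch: true,
        isLandscape: false
    }
};

puppeteer.launch().then(async browser => {
    const page = await browser.newPage();
    await page.emulate(myDevice); // applies viewport and user agent together
    await page.goto('https://www.google.com');
    await browser.close();
});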
Alternatively, you can also use one of the many already existing devices (via puppeteer.devices) when calling page.emulate.
I want to capture the output of an MTKView via the view's texture into a CIImageAccumulator to achieve a gradual painting build-up effect. The problem is that the accumulator seems to be messing with the color/alpha/color space of the original, as shown below:
From the image above, the way I capture the darker-looking brushstroke is via the view's currentDrawable.texture property:
lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)
subStrokeUIView.image = UIImage(ciImage: lastSubStrokeCIImage)
Now, once I take the same image and pipe it into a CIImageAccumulator for later processing (I only do this once per drawing segment), the result is the brighter-looking image shown in the upper portion of the attachment:
lazy var ciSubCurveAccumulator: CIImageAccumulator =
{
    [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0, width: self.frame.width * self.contentScaleFactor, height: self.frame.height * self.window!.screen.scale), format: kCIFormatBGRA8)
}()!

ciSubCurveAccumulator.setImage(lastSubStrokeCIImage)
strokeUIView.image = UIImage(ciImage: ciSubCurveAccumulator.image())
I have tried using a variety of kCIFormats in the CIImageAccumulator definition, all to no avail. What is the CIImageAccumulator doing to mess with the original, and how can I fix it? Note that I intend to use ciSubCurveAccumulator to gradually build up a continuous brushstroke of consistent color. For simplicity of the question, I'm not showing the accumulating part. This problem is stopping me dead in my tracks.
Any suggestions would be appreciated.
The problem came down to two things: one, I needed to set up the MTLRenderPipelineDescriptor() for composite-over compositing, and two, I needed to introduce a CIFilter to hold intermediate composites over the accumulating CIImageAccumulator. This CIFilter also needed to be set up for CISourceOverCompositing. Below is a snippet of code that captures all of the above:
lazy var ciSubCurveAccumulator: CIImageAccumulator =
{
    [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0, width: self.frame.width * self.contentScaleFactor, height: self.frame.height * self.window!.screen.scale), format: kCIFormatBGRA8)
}()! // set up CIImageAccumulator

let lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)

let compositeFilter: CIFilter = CIFilter(name: "CISourceOverCompositing")!
compositeFilter.setValue(lastSubStrokeCIImage, forKey: kCIInputImageKey) // foreground image
compositeFilter.setValue(ciSubCurveAccumulator.image(), forKey: kCIInputBackgroundImageKey) // background image

let bboxChunkSubCurvesScaledAndYFlipped = CGRect(...) // capture the part of the texture that was drawn to

ciSubCurveAccumulator.setImage(compositeFilter.value(forKey: kCIOutputImageKey) as! CIImage, dirtyRect: bboxChunkSubCurvesScaledAndYFlipped) // comp bbox with latest updates
Wrapping the above bit of code inside draw() allows gradual painting to accumulate quite nicely. Hopefully this helps someone at some point.
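The MTLRenderPipelineDescriptor setup isn't shown in the snippet above; a minimal sketch of a source-over blend configuration might look like this (the device/library setup and the shader function names are assumptions, not taken from the original code):

import Metal

// Minimal sketch of a pipeline configured for source-over compositing.
// "vertexShader"/"fragmentShader" and the default-library lookup are assumptions.
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!

let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = library.makeFunction(name: "vertexShader")
descriptor.fragmentFunction = library.makeFunction(name: "fragmentShader")
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

// Source-over blending with premultiplied alpha:
// result = source + destination * (1 - source.alpha)
descriptor.colorAttachments[0].isBlendingEnabled = true
descriptor.colorAttachments[0].rgbBlendOperation = .add
descriptor.colorAttachments[0].alphaBlendOperation = .add
descriptor.colorAttachments[0].sourceRGBBlendFactor = .one
descriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha

let pipelineState = try! device.makeRenderPipelineState(descriptor: descriptor)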
I have a Qt Quick Controls 2 application. In main.qml I have, among other things, a canvas in a scroll view:
Rectangle {
    id: graph
    width: mainArea.width / 3 - 14;
    height: mainArea.height - 20;
    ScrollView {
        anchors.fill: parent;
        Canvas {
            id: canvasGraph;
            width: graph.width;
            height: graph.height;
            property bool paintB: false;
            property string colorRect: "#FFFF40";
            property string name: "ELF header";
            property int paintX: 0;
            property int paintY: 0;
            property int widthP: 160;
            property int heightP: 30;
            property int textX: (paintX + (widthP / 2)) - 15/*func return int length of text*/;
            property int textY: (paintY + (heightP / 2)) + 3;
            onPaint: {
                if (paintB) {
                    var ctx = canvasGraph.getContext('2d');
                    ctx.beginPath();
                    ctx.font = "normal 12px serif";
                    ctx.fillStyle = colorRect;
                    ctx.strokeRect(paintX, paintY, widthP, heightP);
                    ctx.fillRect(paintX, paintY, widthP, heightP);
                    ctx.strokeText("ELF header", textX, textY);
                    ctx.closePath();
                    ctx.save();
                }
            }
            MouseArea {
                id: canvasArea;
                anchors.fill: parent;
                onPressed: {
                    paint(mouseX, mouseY, "aaa", 1);
                }
            }
        }
    }
}
At first I tried drawing into the canvas with a JS function:
function paint(x, y, name, type) {
    canvasGraph.paintB = true;
    canvasGraph.paintX = x;
    canvasGraph.paintY = y;
    canvasGraph.requestPaint();
}
This function is called by pressing the mouse on the canvas. It works well; it draws rectangles one by one. The only problem is that after resizing the app window, all rectangles except the last one are lost. But that is not my primary problem, because it works, and I could resolve that issue later.
For drawing the chart I need a C++ library (ELFIO, for reading ELF files). So in main.cpp I set up two things: the first lets me call functions of a C++ class from main.qml; the second lets me call JS functions from C++. Here is main.cpp:
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include "elffile.h" // header of the elfFile class (file name assumed)

int main(int argc, char *argv[])
{
    QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QGuiApplication app(argc, argv);

    QScopedPointer<elfFile> elfFileObj(new elfFile);

    QQmlApplicationEngine engine;
    engine.load(QUrl(QLatin1String("qrc:/main.qml")));
    engine.rootContext()->setContextProperty("elfFileObj", elfFileObj.data()); // this is for calling C++ from QML

    QObject *rof = engine.rootObjects().first();
    elfFileObj.data()->rofS = rof; // this one is for calling JS functions from C++

    return app.exec();
}
As you can see, reading ELF files is managed by the object elfFileObj, which has a public variable holding the loaded ELF file and a variable rofS holding the object used to access the JS functions in main.qml.
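Simplified, the class described above might look roughly like this (a sketch reconstructed from the description; the exact header is not shown here, so the details are assumptions):

#include <QObject>
#include <QString>
#include <elfio/elfio.hpp> // ELFIO is header-only

// Sketch of the elfFile class as described above; layout is an assumption.
class elfFile : public QObject
{
    Q_OBJECT
public:
    Q_INVOKABLE int loadELF(QString fileName); // callable from QML

    ELFIO::elfio reader;     // holds the loaded ELF file
    QObject *rofS = nullptr; // root object, used to call JS functions in main.qml
};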
elfFileObj has Q_INVOKABLE int loadELF(QString fileName);, where Q_INVOKABLE is a macro that makes the function callable from a QML file. The function:
int elfFile::loadELF(QString fileName)
{
    // strip the "file://" prefix from the URL passed in from QML
    string fileNameReal = fileName.toStdString().substr(7);
    if (!reader.load(fileNameReal.c_str())) {
        return -1;
    }
    QVariant x(30);
    QVariant y(10);
    QVariant name("ELF header");
    QVariant type(1);
    QMetaObject::invokeMethod(rofS, "paint", Q_ARG(QVariant, x), Q_ARG(QVariant, y), Q_ARG(QVariant, name), Q_ARG(QVariant, type));
    y = QVariant(40);
    QMetaObject::invokeMethod(rofS, "paint", Q_ARG(QVariant, x), Q_ARG(QVariant, y), Q_ARG(QVariant, name), Q_ARG(QVariant, type));
    return 0; // success
}
I try to draw two rectangles, one after the other. QMetaObject::invokeMethod should call the JS function, which draws a rectangle at (x, y). The other arguments are unused at the moment.
Main problem: it draws rectangles on the canvas, but the canvas is cleared after every invokeMethod call, so only the last rectangle ever stays on the canvas.
Does anybody have an idea how to preserve the current state of the canvas? Thanks for any help.
It isn't pretty code, but this is my first experience with QML.
The canvas, being an imperative drawing API, just has a dumb buffer of pixel data. It has no concept of objects like rectangles or anything else once the draw call has finished. As such, you are responsible for everything that it displays via your onPaint handler. It does not clear the canvas content from one frame to another (as an optimization), but it will (by necessity) clear it when you resize the window, as it has to allocate a differently sized buffer.
You can see this behaviour here:
import QtQuick 2.6

Canvas {
    id: canvasGraph;
    width: 500
    height: 500
    property int paintY: 10;

    onPaint: {
        var ctx = canvasGraph.getContext('2d');
        ctx.beginPath();
        ctx.font = "normal 12px serif";
        ctx.fillStyle = "#ff0000";
        ctx.strokeRect(10, paintY, 160, 30);
        ctx.fillRect(10, paintY, 160, 30);
        ctx.closePath();
        ctx.save();
    }

    Timer {
        interval: 16
        running: true
        repeat: true
        onTriggered: {
            canvasGraph.requestPaint();
            canvasGraph.paintY += 10
            if (canvasGraph.paintY > canvasGraph.height)
                canvasGraph.paintY = 10
        }
    }
}
Try running this example with qmlscene and resizing the window. You'll notice that all content is cleared on resize, except the one rectangle that it draws.
So, if you want all your rectangles to be retained, then you need to paint them in the onPaint handler each time (and make use of clearRect or some other method to fill the background to get rid of stuff that doesn't belong there anymore, if you are moving stuff around or making them invisible).
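For instance, here is a minimal sketch (one possible approach, not taken from the question) that keeps every rectangle in a list property and repaints the whole list on each pass:

import QtQuick 2.6

Canvas {
    id: canvasGraph
    width: 500
    height: 500
    // every rectangle drawn so far; re-issued in full on each onPaint
    property var rects: []

    onPaint: {
        var ctx = canvasGraph.getContext('2d');
        ctx.clearRect(0, 0, width, height); // wipe stale content first
        ctx.fillStyle = "#FFFF40";
        for (var i = 0; i < rects.length; i++) {
            var r = rects[i];
            ctx.strokeRect(r.x, r.y, 160, 30);
            ctx.fillRect(r.x, r.y, 160, 30);
        }
    }

    MouseArea {
        anchors.fill: parent
        onPressed: {
            canvasGraph.rects.push({ x: mouseX, y: mouseY });
            canvasGraph.requestPaint();
        }
    }
}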
On the other hand, I can't directly explain why invokeMethod would be causing it to clear, as you haven't really presented enough code. It may be that it's resizing the canvas (causing the buffer to reallocate, and be cleared). Either way, given the above, I'd say that it isn't all that relevant.
After all this, while I don't have full background on what you are making, I'd suggest that perhaps Canvas might not be the best tool for what you want. You might want to look into QQuickPaintedItem instead, or (better still) composing your scene using a custom QQuickItem which positions other QQuickItems (or QSGNodes). Canvas and QQuickPaintedItem, while easy to use, are not especially performant.
I didn't solve this problem. I just stopped using QML and went back to plain Qt. That worked for me.
I have a camera app which allows the user to both take pictures and record video. The iPhone is attached to a medical otoscope using an adapter, so the video that is captured is very small (about the size of a dime). I need to be able to zoom the video to fill the screen, but have not been able to figure out how to do so.
I found this answer here on SO that uses Obj-C, but I have not had success in translating it to Swift. I am very close but am getting stuck. Here is my code for handling a UIPinchGestureRecognizer:
@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    var initialVideoZoomFactor: CGFloat = 0.0
    if (sender.state == UIGestureRecognizerState.began) {
        initialVideoZoomFactor = (captureDevice?.videoZoomFactor)!
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
        CATransaction.commit()
        if ((captureDevice?.lockForConfiguration()) != nil) {
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        }
    }
}
This line...
previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
... gives me the error 'Cannot assign value of type CGAffineTransform to type CATransform3D'. I'm trying to figure this out, but my attempts to fix it have been unfruitful.
Figured it out: Changed the problematic line to:
previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
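For reference, the full handler with that change applied might look like this. Note two adjustments of mine that were not in the original snippets: initialVideoZoomFactor is moved out to a property so the value captured at .began persists across gesture callbacks, and lockForConfiguration() is called with do/try since it throws in Swift.

// Property, so the zoom factor captured at .began survives later calls
// (an adjustment of mine, not part of the original answer).
var initialVideoZoomFactor: CGFloat = 1.0

@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    if sender.state == .began {
        initialVideoZoomFactor = captureDevice?.videoZoomFactor ?? 1.0
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        CATransaction.commit()
        do {
            try captureDevice?.lockForConfiguration()
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        } catch {
            print("Could not lock capture device for configuration: \(error)")
        }
    }
}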
I am looking for a more efficient way to constrain/set text in Raphael.
I have text that can be written in a box. The text should be centered (based on a number that could change if the user wants to shift the text left/right), and it cannot go beyond the boundaries of the paper.
This is what I do now, and it's not manageable performance-wise:
// Build a path
var path = this.paper.print(x, x, text, font, size, 'middle');

// Center the line by getting the bounding box and shifting by half of it
var bb = path.getBBox();
path.transform('...T' + [-bb.width / 2, 0]);

// Compare paper size vs bb;
// if it goes beyond, I adjust X and Y accordingly and redo the above
So ideally I would like to predict the size of the text before it prints. I am not sure this is possible, though, as it is probably font-dependent. I have looked for a command to constrain text but do not see one.
The other thought I had was to create some kind of shadow paper that does not print to screen and use that to determine the size before I render to the user. I am not sure where the lag is, though: if it's in the screen rendering, good, but if it's in the general logic of creating the SVG, then that won't help.
I'd appreciate suggestions.
You can use opentype.js for measuring text; you can also get the SVG path by using the Path.toPathData method. There is no need for Cufón-compiled JS fonts.
To measure text, create a canvas element somewhere in your DOM:
<canvas id="canvas" style="display: none;"></canvas>
This function loads the font and computes the width and height of the text:
function measureText(text, font_name, size) {
    var dfd = $.Deferred();
    opentype.load('/fonts/' + font_name + '.ttf', function (err, font) {
        if (err) {
            dfd.reject('Font is not loaded');
            console.log('Could not load font: ' + err);
        } else {
            var canvas = document.getElementById('canvas');
            var ctx = canvas.getContext('2d');
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            var tmp_path = font.getPath(text, 100, 100, size);
            tmp_path.draw(ctx); // draw the path on the canvas
            var ascent = 0;
            var descent = 0;
            var width = 0;
            var scale = 1 / font.unitsPerEm * size;
            var glyphs = font.stringToGlyphs(text);
            for (var i = 0; i < glyphs.length; i++) {
                var glyph = glyphs[i];
                if (glyph.advanceWidth) {
                    width += glyph.advanceWidth * scale;
                }
                if (i < glyphs.length - 1) {
                    var kerningValue = font.getKerningValue(glyph, glyphs[i + 1]);
                    width += kerningValue * scale;
                }
                ascent = Math.max(ascent, glyph.yMax);
                descent = Math.min(descent, glyph.yMin);
            }
            dfd.resolve({
                width: width,
                height: ascent * scale,
                actualBoundingBoxDescent: descent * scale,
                fontBoundingBoxAscent: font.ascender * scale,
                fontBoundingBoxDescent: font.descender * scale
            });
        }
    });
    return dfd.promise();
}
Then use this function to get the text dimensions:
$.when(measureText("ABCD", "arial", 48)).done(function (res) {
    var height = res.height,
        width = res.width;
    // ...
});
I have seen graphs in Flash and elsewhere that adapt nicely to whatever size the browser or flexible element they are inside happens to be. I'm not really well versed in RaphaelJS, but can you do this, and if so, how?
In raphaeljs, you can call .setSize on a Raphael object to update its size. To adapt to runtime changes in browser window size, you can respond to a window resize event. Using jQuery, you could do:
// initialize Raphael object
var w = $(window).width(),
    h = $(window).height();
var paper = Raphael($("#my_element").get(0), w, h);

$(window).resize(function () {
    w = $(window).width();
    h = $(window).height();
    paper.setSize(w, h);
    redraw_element(); // code to handle re-drawing, if necessary
});
This will get you a responsive SVG:
var w = 500, h=500;
var paper = Raphael(w,h);
paper.setViewBox(0,0,w,h,true);
paper.setSize('100%', '100%');
Normally you could set the width to 100% and define a viewBox within the SVG. But RaphaelJS manually sets a width and height directly on your SVG elements, which kills this technique.
Bosh's answer is great, but it was distorting the aspect ratio for me. I had to tweak a few things to get it working correctly, and I included some logic to maintain a maximum size.
// Variables
var width;
var height;
var maxWidth = 940;
var maxHeight = 600;
var widthPer;

function setSize() {
    // Setup width
    width = window.innerWidth;
    if (width > maxWidth) width = maxWidth;

    // Setup height
    widthPer = width / maxWidth;
    height = widthPer * maxHeight;
}

var paper = Raphael(document.getElementById("infographic"), 940, 600);
paper.setViewBox(0, 0, 940, 600, true);

window.onresize = function (event) {
    setSize();
    paper.setSize(width, height);
    redraw_element();
};