I'm programming a C++ project that captures an image and automatically uploads it to an FTP server. The images are successfully being uploaded and I can see them by refreshing the FTP server web page.
My question is whether it is possible to refresh the FTP server web page without pressing the refresh button. For example, can I run C++ code to refresh the specific page I'm working on?
The FTP protocol has no concept of notifications, so there is no way for an FTP server to make an FTP client automatically refresh a listing when the content of a directory changes. The client needs to re-poll the listing periodically. A web browser won't do that automatically; you would have to write or find a browser plugin to force it. Or stop using a web browser to display the FTP server's listing and use an actual FTP client instead, preferably one with built-in timer capabilities.
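If you want to do the re-polling from your own C++ code instead, a minimal sketch using libcurl could look like the following (the FTP URL and credentials are placeholders, and the 30-second interval is arbitrary):

```cpp
// Poll an FTP directory listing every 30 seconds using libcurl.
// The URL, user name, and password below are placeholders.
#include <curl/curl.h>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    for (;;) {
        std::string listing;
        CURL* curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/images/");
            curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
            curl_easy_setopt(curl, CURLOPT_DIRLISTONLY, 1L);  // file names only
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &listing);
            if (curl_easy_perform(curl) == CURLE_OK)
                std::cout << "Current listing:\n" << listing << std::endl;
            curl_easy_cleanup(curl);
        }
        std::this_thread::sleep_for(std::chrono::seconds(30));
    }
}
```

Comparing successive listings then tells you when a new image has arrived, which is essentially what an FTP client with a refresh timer does for you.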
Related
I am developing a Chrome extension that detects when a file is downloaded, sends it to a website (also created by me with Django), and after the website processes it, the Chrome extension receives another file with some results.
I would like to know how I could connect them. Does anyone have examples, or names of technologies, that allow a Chrome extension to send and receive files from a website or server? Thank you in advance!
Suppose I am using WinInet/WinHTTP for crawling a website. In the past I could simply ask a user to log in to a website using either an embedded IE control or the IE browser, and WinInet would use the same cookies as the IE browser. That will no longer work, as Internet Explorer is getting old and will be removed very soon.
For whatever reason the Edge browser does not wrap/use the Windows Internet settings / cookie storage... Does anyone have experience logging in through an embedded WebView2 control, fetching the cookies, and transferring them to WinInet? The purpose is to be able to use WinInet/WinHTTP to crawl the website in a logged-in state.
Is it a feasible solution to log in through an embedded WebView2 control and transfer all cookies to WinInet before issuing WinInet HTTP requests?
(I have added a Delphi-specific tag (TEdgeBrowser), but I am interested in hearing whether the concept described above can be made to work in general.)
We added CoreWebView2.CookieManager to WebView2, so you should be able to enumerate all cookies in WebView2 and set them on WinInet, or vice versa. WebView2 and Edge are based on Chromium and have their own HTTP stack and state location, so they are not connected to WinInet.
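As a rough C++ sketch of that hand-off (assuming `webView` is an already-initialized `ICoreWebView2*` from your WebView2/TEdgeBrowser host, and that you build against the WebView2 SDK and link wininet.lib), the enumerate-and-copy step might look something like this:

```cpp
// Sketch: copy the cookies WebView2 holds for a site into WinInet so that
// later WinInet/WinHTTP requests go out in the logged-in state.
#include <windows.h>
#include <wininet.h>
#include <webview2.h>
#include <wrl.h>
#include <string>

using Microsoft::WRL::Callback;
using Microsoft::WRL::ComPtr;

void CopyCookiesToWinInet(ICoreWebView2* webView, const std::wstring& url) {
    // get_CookieManager lives on ICoreWebView2_2.
    ComPtr<ICoreWebView2_2> webView2;
    if (FAILED(webView->QueryInterface(IID_PPV_ARGS(&webView2)))) return;

    ComPtr<ICoreWebView2CookieManager> cookieManager;
    if (FAILED(webView2->get_CookieManager(&cookieManager))) return;

    // Asynchronously fetch all cookies WebView2 has for this URL.
    cookieManager->GetCookies(
        url.c_str(),
        Callback<ICoreWebView2GetCookiesCompletedHandler>(
            [url](HRESULT hr, ICoreWebView2CookieList* list) -> HRESULT {
                if (FAILED(hr) || !list) return S_OK;
                UINT count = 0;
                list->get_Count(&count);
                for (UINT i = 0; i < count; ++i) {
                    ComPtr<ICoreWebView2Cookie> cookie;
                    if (FAILED(list->GetValueAtIndex(i, &cookie))) continue;
                    LPWSTR name = nullptr;
                    LPWSTR value = nullptr;
                    cookie->get_Name(&name);
                    cookie->get_Value(&value);
                    // Hand the cookie to WinInet for the same URL. HttpOnly
                    // cookies may additionally need INTERNET_COOKIE_HTTPONLY.
                    InternetSetCookieExW(url.c_str(), name,
                                         value ? value : L"", 0, 0);
                    CoTaskMemFree(name);
                    CoTaskMemFree(value);
                }
                return S_OK;
            }).Get());
}
```

This is only a sketch of the direction the answer describes; in a real application you would also carry over domain/path/expiry attributes and wait for the async callback to finish before firing the first WinInet request.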
I am developing an application which uses a MySQL database for login information. To connect to the database, I need the MySQL login data. I think it's a bad idea to distribute the MySQL login data with the app, so my question is: what is the common approach for this? I would prefer not to have a server running that acts as a login server or the like. So how can you secure that login data in the application? Is it even possible, somehow? The user should not be able to read the login data out.
I read Hiding MySQL Credentials in Application, but they suggest running a server, which I would like to avoid...
Thanks in advance
Could you read the credentials from an external file local to the server? That way, you can distribute your application and include instructions to enter the correct credentials in a configuration file before running the application.
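As a minimal sketch of that idea in C++ (the file name `db.conf` and the key names are just examples of what you would document for your users):

```cpp
// Read MySQL connection settings from a local key=value config file,
// e.g. a "db.conf" shipped next to the executable containing:
//   host=localhost
//   user=appuser
//   password=secret
//   database=appdb
#include <fstream>
#include <map>
#include <stdexcept>
#include <string>

std::map<std::string, std::string> LoadConfig(const std::string& path) {
    std::map<std::string, std::string> config;
    std::ifstream in(path);
    if (!in) throw std::runtime_error("cannot open config file: " + path);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip blanks/comments
        auto pos = line.find('=');
        if (pos == std::string::npos) continue;
        config[line.substr(0, pos)] = line.substr(pos + 1);
    }
    return config;
}

// Usage sketch (connect() stands in for whatever MySQL client API you use):
//   auto cfg = LoadConfig("db.conf");
//   connect(cfg["host"], cfg["user"], cfg["password"], cfg["database"]);
```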
Can anybody explain exactly how a web beacon works? I know they're generally used by advertising platforms, but I can't really find a good explanation of how they work.
I know that cookies aren't accessible cross-domain. A web beacon is an image that sends a request to the server, and the server adds a cookie to the response, right? So how can it be accessed on different domains?
Thanks!
When an HTML page is downloaded the browser parses the page and looks for additional resources needed to display the page, such as images. For each image it finds the browser makes another request to a server in the background. When servers receive requests, they usually log the request to monitor load on the server, and record information about who sent the request and where it came from. A web beacon is a tiny invisible graphic that generates a request to the tracking firm's server. They record the request in their logs and then analyze their logs to see who went where and did what and when.
When returning the image from their servers to the browser, they can also send down information to be added to a cookie. There are third-party cookies that can be tracked across domains. If you come back to the site, and the beacon request is made again, that cookie will also be sent up in the request to the server and the tracking firm will have more information about you.
Think about this. Even though you are visiting myfavoritesite.com the web beacon image is being requested from trackers.com. The cookie they create is assigned/locked to their domain, trackers.com. But if you then surf over to myotherfavoritesite.com, and they too are sending web beacons to trackers.com, the cookie will essentially be shared between the two sites. There are more considerations here, but that is the basic premise.
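To make the mechanics concrete, here is a minimal sketch of what the tracker's side of that exchange could look like, written with the cpp-httplib library; the domain trackers.com, the cookie name trk_id, the path /beacon.gif, and the port are all made up for illustration:

```cpp
// Minimal "web beacon" endpoint: serves a 1x1 transparent GIF, logs who
// asked for it, and assigns a tracking cookie scoped to the tracker's own
// domain. Uses cpp-httplib (https://github.com/yhirose/cpp-httplib).
#include <httplib.h>
#include <iostream>
#include <string>

int main() {
    // Bytes of a standard 1x1 transparent GIF.
    static const std::string kPixel(
        "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
        "\xFF\xFF\xFF\x21\xF9\x04\x01\x00\x00\x00\x00\x2C\x00\x00\x00\x00"
        "\x01\x00\x01\x00\x00\x02\x01\x44\x00\x3B", 42);

    httplib::Server server;
    server.Get("/beacon.gif", [](const httplib::Request& req,
                                 httplib::Response& res) {
        // The Referer header tells the tracker which page embedded the image,
        // and any existing trk_id cookie identifies the returning visitor.
        std::cout << "beacon hit from page: " << req.get_header_value("Referer")
                  << " cookies: " << req.get_header_value("Cookie") << std::endl;

        // No tracking cookie yet? Assign one. Because this response comes from
        // the tracker's domain, the same cookie is sent back on beacon requests
        // from every site that embeds the image.
        if (req.get_header_value("Cookie").find("trk_id=") == std::string::npos) {
            res.set_header("Set-Cookie",
                "trk_id=12345; Domain=trackers.com; Path=/; Max-Age=31536000");
        }
        res.set_content(kPixel, "image/gif");
    });
    server.listen("0.0.0.0", 8080);
}
```

The key point the sketch tries to show is that the cookie is issued for the tracker's domain, so the same trk_id comes back no matter which site embedded the image.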
A web bug (also known as a web beacon) is an important tool commonly used by online advertisers for marketing and advertisement analysis, i.e. for tracking and monitoring the activity of users on a website or in marketing content such as a blog or email. An advertiser inserts a web bug into their content (usually a website or email) in order to track how many people opened a particular piece of content, and in which application and country it is being viewed. So whenever an advertisement is displayed by a third party, just know that you are being tracked for marketing analysis purposes.
Web bug tools are offered, free or premium, mostly by CRM service providers like HubSpot CRM, Freshsales CRM, Salesforce CRM, etc. However, web bug PHP code can also be used for this if a tracking service from a CRM provider is not available.
And instead of going off and creating one yourself using PHP and Apache redirects, my vote is that you go to http://webbeak.com, create one, use it, and track it. No cost either.
I am building an application that has a web front end and a desktop client application. The web front end allows users to log in using Facebook. It exposes a web service to the desktop application for uploading data. The web service needs to ensure that the desktop application uploading data for a user really is acting on behalf of that user. I have already implemented Facebook login on both the client and the web interface using the Graph API. How would I go about using Facebook to validate that the user using the desktop application is who they say they are? Also, how can I make it so the user can log in once and not need to log in again on the desktop application?
After much struggle I figured out that it's possible to receive a session key that does not expire. You can do this by requesting offline access to a user's profile. I stored this in the database on the web side, then retrieved and stored it on the desktop side. The only problem is that the session key can technically be viewed and used by someone other than the user to make requests. Any other suggestions would be appreciated.
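For the "is the desktop user who they say they are" part, one possible approach is to have the web service ask the Graph API who a presented token belongs to and compare that against the user ID it has stored. A rough libcurl sketch, with placeholder token/ID values and a deliberately naive JSON check (a real JSON parser would be better):

```cpp
// Sketch: verify that an access token presented by the desktop client
// actually belongs to the expected Facebook user by asking the Graph API
// who the token is for.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

bool TokenBelongsToUser(const std::string& accessToken,
                        const std::string& expectedUserId) {
    std::string response;
    CURL* curl = curl_easy_init();
    if (!curl) return false;
    std::string url = "https://graph.facebook.com/me?fields=id&access_token="
                      + accessToken;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK) return false;
    // The Graph API answers with JSON such as {"id":"1234567890"}.
    return response.find("\"id\":\"" + expectedUserId + "\"") != std::string::npos;
}

int main() {
    std::cout << (TokenBelongsToUser("STORED_TOKEN", "100000000000000")
                      ? "token matches the expected user\n"
                      : "token rejected\n");
}
```

This doesn't remove the risk mentioned above (anyone holding the stored key can impersonate the user), it only confirms which account a given token maps to.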