I am using Django with Postgres.
My site serves half a million pages without any issue.
However I am using an API system and it works like below:
A first party calls my site's API, my site fetches data from a third-party site using its API, extracts some of the data, and passes it back to the first party. It works perfectly. During this cycle I have to check my Postgres database to see whether the data is already present.
Everything works fine. But if the third-party API does not respond, or there is a server issue on their side, it takes a long time to come back with a 404 error, my Postgres just dies, and I have to run service postgresql restart every time to get the site working again.
What could be the issue? How do I check why Postgres is dying?
One possibility, though it's only a guess, is that your code is locking the database table while the third-party API call is made. This would prevent other updates from occurring while waiting.
This wouldn't explain why you would need to restart the Postgres server, though; the lock should be released once the third-party API call times out.
It might help to add to your question the code that checks whether the data is already present in the DB, calls the remote API, and finally updates the database with new data.
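If it does turn out something like that is going on, one common fix is to keep the remote call outside any transaction and give it a strict timeout. A rough sketch of that shape, assuming the requests library and a hypothetical CachedResult model:

    import requests
    from myapp.models import CachedResult  # hypothetical model

    def fetch_data(key):
        # 1. Check Postgres first; a short, self-contained query.
        try:
            return CachedResult.objects.get(key=key).payload
        except CachedResult.DoesNotExist:
            pass

        # 2. Call the third-party API outside any transaction, with a
        #    timeout so a dead remote server can't pin resources for long.
        resp = requests.get("https://thirdparty.example.com/api",  # hypothetical URL
                            params={"key": key}, timeout=5)
        resp.raise_for_status()

        # 3. Only now touch the database again, in a short write.
        return CachedResult.objects.create(key=key, payload=resp.text).payload

The point is that the slow network I/O never overlaps with an open transaction or lock.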
My app uses a couple of different DBs on the same MSSQL server (read only). My problem is that one of the two MSSQL connections always works fine, whereas the other hangs indefinitely on the first query until Flask cuts off the connection. This, however, happens (most of the time) only when the app is running under Apache. When I run the Flask test server, everything is fine.
I've surrounded the MSSQL query with logging messages and am therefore positive that the bug is in that particular query. It is just a simple lookup by primary key, like this:
db.query(Record).get(id)
The DBs are accessed through different engines whose URIs only differ by the database name.
My problem is that I have no idea on how to start debugging this. Any tips?
[EDIT] I've managed to get SQLAlchemy logging going under Apache. I've set echo=True on the engine, and it doesn't output anything at all. It just hangs.
It turns out that the problem has nothing to do with Apache, but with connection timeouts. Whereas the MySQL engine gives an error message when trying to execute a query on an expired server connection, the MSSQL engine just silently stalls forever. Adding pool_recycle=3600 to create_engine() solved the issue.
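For reference, a minimal sketch of that fix (the URI is hypothetical; adjust it to your driver):

    from sqlalchemy import create_engine

    # Recycle pooled connections after an hour so a query is never issued
    # on a server-side-expired connection, which the MSSQL driver would
    # otherwise wait on forever instead of raising an error.
    engine = create_engine(
        "mssql+pyodbc://user:password@dsn_name",  # hypothetical URI
        pool_recycle=3600,
    )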
So I found this in the docs:
Every client sharing a Firebase maintains its own internal version of any active data. When data is updated or saved, it is written to this local version of the Firebase. The Firebase client then synchronizes that data with the Firebase servers and with other clients on a 'best-effort' basis.
As a result, all writes to Firebase will trigger local events immediately, before any data has even been written to the server. This means the app will remain responsive regardless of network latency or Internet connectivity.
Once connectivity is reestablished, we'll receive the appropriate set of events so that the client "catches up" with the current server state, without having to write any custom code.
Yeah, I've got that going for me, but my question is more specific (and I wasn't able to find an answer to it):
I am using the REST API from a C++ program which executes a curl request. Everything is working so far. For insertion this is not a big deal: in case of an error I can easily store the data via Redis or something and update it later. But how does reading work?
To give you a scenario:
I made a scanner which recognizes an ID. After this process, the ID is inserted (as explained above) into Firebase. People can also register on a corresponding homepage and insert their ID manually. This is saved in Firebase as well, to the same node, obviously.
Firebase is designed to provide the data from one endpoint to another by accessing the DB, which is fine. Now suppose a user registers on the site and is inserted into the DB, and suddenly my internet connection goes away.
Is there any way to get the last "stack" or full dataset that was in use before my connection went away? Is there a way to replicate the DB and queue jobs that will sync once the connection is re-established?
Disclaimer: I work for Firebase.
The passage you're quoting above specifically refers to the client libraries that we maintain (currently in Objective-C, Java, and JavaScript), which are pieces of code that we've written and that you run in your app.
In this case, you're specifically not using a client library; you're just hitting our regular REST endpoint, so you won't get any of those benefits. Implementing your own client would be a significant undertaking: it's the client code that maintains the internal view of the data, compensates when it's offline, triggers local events, etc.
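Over plain REST you therefore have to build the offline handling yourself, much as you describe with Redis. A minimal sketch of the write-queue half of that idea, shown in Python for brevity (the project URL and paths are hypothetical; the same pattern applies to your C++/curl code):

    import json
    import requests

    FIREBASE_URL = "https://your-app.firebaseio.com"  # hypothetical project URL
    pending = []  # in a real app, persist this queue (Redis, a file, ...)

    def write(path, data):
        # Try the REST write; on any network failure, queue it for replay.
        try:
            r = requests.put(FIREBASE_URL + path + ".json",
                             data=json.dumps(data), timeout=5)
            r.raise_for_status()
        except requests.RequestException:
            pending.append((path, data))

    def flush():
        # Replay queued writes once connectivity is back.
        while pending:
            path, data = pending[0]
            try:
                r = requests.put(FIREBASE_URL + path + ".json",
                                 data=json.dumps(data), timeout=5)
                r.raise_for_status()
            except requests.RequestException:
                return  # still offline; try again later
            pending.pop(0)

The read side is the hard part: to "catch up" after a disconnect you would have to re-fetch the nodes you care about and diff them against your local copy, which is exactly the bookkeeping the client libraries do for you.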
I am using a ColdFusion MX8 server, and one of the scheduled tasks had been running for 2 years, but suddenly, as of 01/12/2014, scheduled tasks stopped running. When I browse the file in a browser, it runs successfully without error.
I am not sure whether this is an update or license expiration problem. I am aware that in the middle of this year Adobe ended support for ColdFusion 8.
The most common source of this problem is external to the server. When you say you browsed to the file and it worked in a browser, it is very important to know whether that test was performed on the server desktop itself. Knowing that you can browse to the file from your own desktop or laptop is of small value.
The usual culprit is a change in the DNS or network stack that is interfering with resolution. For example, if the internal DNS serving your DMZ suddenly starts serving the "external" address, your server can no longer browse to your own domain. Or the IP served for the domain in question goes from being 127.0.0.1 to some other IP that the server can't access correctly due to a reverse proxy, load balancer, or some other rule. Finally, sometimes Apache or IIS is altered so that an IP that was previously serviced (127.0.0.1 being the most common example) no longer responds.
If it is something intrinsic to the scheduler service, then Frank's advice is pretty good; in particular, look for "proxy scheduler" entries in the log, as they can give you good clues. I would also log the results of the scheduled task to a file, then check the file. If it exists, your scheduled tasks ARE running; they are just not succeeding. Good luck!
I've seen the cf scheduling service crash in CF8. The rest of CF is unaffected.
Have you tried restarting the server?
Here are your concerns:
Your file (works, since you tested it manually).
Your scheduled task (fails).
Your ColdFusion application/service (any changes here?).
Your server (any changes here?).
To test your problem, create a duplicate task and schedule it. Leave the original in place (maybe set the new one to run earlier). Use the same file, too. See if it completes.
If it doesn't, then you have a larger problem. Since the ColdFusion server sits atop the JVM, there could be something happening there. Things just don't stop working unless something got corrupted or you got compromised. If you hardened your server by rearranging/renaming the file structure to make it more secure, that would break your task.
So, going back: if your test schedule works, determine what is different between the two. Note that you have logging capabilities: Logging abilities for CF8
If you are not directly in charge of maintaining this server, then I would recommend asking around to see whether there was recent maintenance and, if so, what was done to the server.
We are using Django 1.3.1 and Postgres 9.1.
I have a view which just fires multiple selects to get data from the database.
The Django documentation mentions that when a request completes, a ROLLBACK is issued if only SELECT statements were fired during the call to the view. But I am seeing a lot of "idle in transaction" entries in the log, especially when I have more than 200 requests, and I don't see any COMMIT or ROLLBACK statements in the Postgres log.
What could be the problem? How should I handle this issue?
First, I would check out the related post What does it mean when a PostgreSQL process is “idle in transaction”? which covers some related ground.
One cause of "idle in transaction" can be developers or sysadmins who have entered "BEGIN;" in psql and forgot to "commit" or "rollback". I've been there. :)
However, you mentioned your problem is related to having a lot of concurrent connections. It sounds like investigating the "locks" tip from the post above may be helpful to you.
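To make that concrete, here is a small sketch (connection parameters are hypothetical) that lists the stuck backends on 9.1 and shows how close you are to the connection limit:

    import psycopg2  # standard Postgres driver, assumed available

    conn = psycopg2.connect("dbname=mydb user=postgres")  # hypothetical
    cur = conn.cursor()

    # On Postgres 9.1, idle transactions appear in pg_stat_activity with
    # current_query = '<IDLE> in transaction' (9.2+ renamed these columns
    # to pid/state/query).
    cur.execute("""
        SELECT procpid, usename, query_start, current_query
        FROM pg_stat_activity
        WHERE current_query = '<IDLE> in transaction'
        ORDER BY query_start
    """)
    for row in cur.fetchall():
        print(row)

    # How many of the available slots are in use right now?
    cur.execute("SELECT count(*) FROM pg_stat_activity")
    used = cur.fetchone()[0]
    cur.execute("SHOW max_connections")
    print("%s of %s connection slots in use" % (used, cur.fetchone()[0]))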
A couple more suggestions: this problem may be secondary. The primary problem might be that 200 connections is more than your hardware and tuning can comfortably handle, so everything gets slow, and when things get slow, more things are waiting for other things to finish.
If you don't have a reverse proxy like Nginx in front of your web app, consider adding one. It can run on the same host without additional hardware. The reverse proxy will serve to regulate the number of connections to the backend Django web server, and thus the number of database connections. I've been here before with too many database connections, and this is how I solved it!
With Apache's prefork model, there is a 1:1 correspondence between the number of Apache workers and the number of database connections, assuming something like Apache::DBI is in use. Imagine someone connects to the web server over a slow connection. The web and database servers take care of the request relatively quickly, but then the request is held open on the web server unnecessarily long as the content is dribbled back to the client. Meanwhile, the database connection slot is tied up.
By adding a reverse proxy, the backend server can quickly deliver a reply back to the reverse proxy and then free the backend worker and database slot. The reverse proxy is then responsible for getting the content back to the client, possibly holding open its own connection for longer. You may have 200 connections to the reverse proxy up front, but you'll need far fewer workers and DB slots on the backend.
If you graph the DB slots with MRTG or similar, you'll see how many slots you are actually using, and you can tune down max_connections in PostgreSQL, freeing those resources for other things.
You might also look at pg_top to help monitor what your database is up to.
I understand this is an older question, but this article may describe the problem of idle transactions in django.
Essentially, Django's TransactionMiddleware will not explicitly COMMIT a transaction if it is not marked dirty (usually triggered by writing data). Yet it still BEGINs a transaction for all queries, even if they are read-only. So Postgres is left waiting to see whether any more commands are coming, and you get idle transactions.
The linked article shows a small modification to the transaction middleware to always commit (basically remove the condition that checks if the transaction is_dirty). I'll be trying this fix in a production environment shortly.
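For anyone who wants to try the same idea without patching Django itself, a minimal version of that modified middleware might look like this (written against the Django 1.3 transaction API; a sketch of the article's fix, not a drop-in):

    from django.db import transaction
    from django.middleware.transaction import TransactionMiddleware

    class AlwaysCommitTransactionMiddleware(TransactionMiddleware):
        # Like the stock middleware, but commits even when the transaction
        # was never marked dirty, so SELECT-only requests don't linger as
        # "idle in transaction".
        def process_response(self, request, response):
            if transaction.is_managed():
                transaction.commit()  # unconditional: no is_dirty() check
                transaction.leave_transaction_management()
            return response

You would then list this class in MIDDLEWARE_CLASSES in place of the stock TransactionMiddleware.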
Here is my scenario: I have an iPhone app (written in Monotouch but that has nothing to do with the design) that has consumables. I give 25 free consumables when the app is installed. If the user deletes the app and re-installs it, he now gets the same 25 free consumables. I need a way to prevent this.
So I came up with the idea of a database on a server (my website host?) which would hold a list of UDIDs. If the user's UDID is in the database (meaning he has already installed the app), a response is sent back to the app to set the consumable count to zero. If the UDID is not in the DB, it is added, and the response indicates a new install.
I am thinking of using REST (simpler) and a Linux host for the server side. My questions are:
Is there a better way of doing this?
What is the language of choice on the server?
What about sqlREST? (looks very good to me, but will it work in the above scenario?)
Well, I can tell you what MY language of choice would be: ASP.NET/C# in combination with a SQL Server DB. I have my website running at a host which offers this combination for just a few bucks per month.
You don't even need web services. You could just set up an ASPX page on your server and call it using NSString.FromUrl (or whatever the method is called): "mycounter.aspx?udid=1234". Every time the page gets called, it increases the usage count of the passed-in device ID, and the only thing it ever outputs is the number of remaining requests.
Your client parses that response as an integer and, if it is zero, informs the user.
You should probably add some hashing to make sure that evil Krumelur won't go to your URL and call it for random device IDs, rendering them unusable. :-)
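For what it's worth, the endpoint itself is tiny in any stack. Here is the same idea sketched in Python/Flask just to show the moving parts (route, secret, and schema are all hypothetical), including the signature check suggested above:

    import hashlib
    import hmac
    import sqlite3

    from flask import Flask, abort, request

    app = Flask(__name__)
    SECRET = b"shared-secret-baked-into-the-app"  # hypothetical

    @app.route("/counter")
    def counter():
        udid = request.args.get("udid", "")
        sig = request.args.get("sig", "")
        # Reject requests whose signature doesn't match, so nobody can
        # burn down counters for random device IDs.
        expected = hmac.new(SECRET, udid.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            abort(403)

        conn = sqlite3.connect("consumables.db")
        conn.execute("CREATE TABLE IF NOT EXISTS devices "
                     "(udid TEXT PRIMARY KEY, remaining INTEGER)")
        row = conn.execute("SELECT remaining FROM devices WHERE udid = ?",
                           (udid,)).fetchone()
        if row is None:
            remaining = 25  # first install: grant the free consumables
            conn.execute("INSERT INTO devices VALUES (?, ?)",
                         (udid, remaining))
            conn.commit()
        else:
            remaining = row[0]  # reinstall: report what's left, no reset
        return str(remaining)

The client only ever sees the remaining count, so a reinstall can't reset it.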
René
The answer really depends on your web host and what they support, which probably depends on your transaction volume and so on.
Since you are using Monotouch, I'm going to assume you are comfortable in the .NET/C# world.
I would look at a WCF web service written in c#. This in turn would use SQL server for storage. Of course you could just go straight to a SQL server stored procedure.
sqlREST looks interesting, but at a glance it seems you need to be running an Apache + Tomcat stack for it to work.
If you just want the lowest possible bar to get it working, then I agree with the other poster: ASP.NET + SQL Server would get it done too.