I'm running my Golang service on Heroku. Strangely, I see small hiccups in that service that seem to be quite random. The service is written in plain Go and does not interact with any database, storage solution, or caching service.
I already contacted Heroku support, and they hinted that this happens mainly during dyno restarts. They told me that, although preboot is enabled, when a dyno is recycled the new dyno comes up and takes traffic immediately, and then the old one is shut off. Hence I suspect the interruptions come from the Go HTTP server not starting up fast enough. Here's how I run it:
s := &http.Server{
    Addr:         ":" + exposePort,
    Handler:      h,
    ReadTimeout:  time.Duration(timeout) * time.Second,
    WriteTimeout: time.Duration(timeout) * time.Second,
}

err = s.ListenAndServe()
errs <- fmt.Errorf("server error: %s", err.Error())
if client != nil {
    client.CaptureMessage(err.Error())
}

go func() {
    c := make(chan os.Signal, 1) // buffered, as signal.Notify requires
    signal.Notify(c, syscall.SIGINT)
    errs <- fmt.Errorf("%s", <-c)
}()
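For reference, a minimal sketch of a variant that drains in-flight requests when the dyno is shut down (Heroku sends SIGTERM on dyno shutdown; the port, handler, and timeout below are placeholders, not the original code):

package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

    go func() {
        stop := make(chan os.Signal, 1)
        signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
        <-stop
        // Heroku sends SIGKILL roughly 30s after SIGTERM, so finish well before that.
        ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
        defer cancel()
        srv.Shutdown(ctx) // stop accepting new connections, let in-flight requests finish
    }()

    if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
        log.Fatal(err)
    }
}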
Is this a known issue, or has someone experienced something similar? I would really appreciate some help with this!
I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API gets a lot of requests during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API by running the HTTP requests many times in Postman; out of twenty attempts I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not under load, and when I go directly to the HTTP integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so the timeout is not the problem. Also, the HTTP 503 does not come 30 seconds after the request; it comes instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects keep-alive to be configured in line with its standard behaviour, so I started tweaking my NGINX server parameters until I solved the issue.
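For example, these are the kind of NGINX directives involved (the values here are illustrative; tune them so the backend's keep-alive window is longer than the gateway's idle timeout):

http {
    keepalive_timeout  650s;    # keep idle connections open longer than the gateway does
    keepalive_requests 10000;   # allow many requests per kept-alive connection
    # ... the rest of your configuration ...
}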
I had the same issue with a self-built Node microservice integrated into AWS API Gateway. After reconfiguring the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify that your problem is the same, i.e. through more detailed log output
In API Gateway - Logging, add more output under "Log format"
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After hitting the API Gateway endpoint and reproducing the failure, consult the logs again; the integration fields should now show the error, in my case INTEGRATION_NETWORK_FAILURE.
Solving it in the Node.js microservice (using Express)
Add timeouts for headers and keep-alive to the Express server's socket configuration when it starts listening.
const app = require('express')();

// if not already set, and if you need to advertise the keep-alive settings in the HTTP response, you might want to use this
/*
app.use((req: Request, res: Response, next: NextFunction) => {
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('Keep-Alive', 'timeout=30');
  next();
});
*/

/* ... your main logic ... */

const server = app.listen(8080, 'localhost', () => {
  console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

server.keepAliveTimeout = 30 * 1000; // <- important lines
server.headersTimeout = 35 * 1000;   // <- important lines
Reason
Some AWS components seem to demand that the connection be kept alive, even if the server says otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse such a connection, the reuse fails because the other side has most likely already closed it, hence the assumed "NETWORK FAILURE".
The error seems intermittent because API Gateway, at least, appears to close unused connections after a while, which gives a clean execution the next time. I can only assume they do this for performance reasons and will not settle for anything less.
I've been working for about 2 years developing full-stack apps (which were fairly mature when I initially got to them) and am now starting my first one from scratch. I spent the day spinning up an EC2 instance, loading the dependencies for my stack (Postgres, Golang, React), and all the other frills a shiny new machine needs (tmux, npm, vim plugins, git integration, etc.). I'm now stuck.
I used create-react-app to get started on my front end and am running npm start to keep a build going. To see this, I can go to instanceip:3000.
I created Go server code which is serving on port 8080, and I can hit my hello-world handler by going to instanceip:8080/hello.
I added an axios call in my React code to GET the endpoint /hello, which returns a 404.
I also attempted to have my Go server serve my index file as a static page at instanceip:8080, and that returns a 404 as well.
How do I get my different ports to play nice? I can’t figure out what I’m missing here.
Here’s a snippet of my server code in main:
indexLoc := os.Getenv(webRootEnv) + "/index.html" // verified this is the correct path to my static index.html file by logging it out
fs := http.FileServer(http.Dir(indexLoc))
http.ListenAndServe(":8080", nil)
http.Handle("/", fs)
Anyone have any ideas what I’m missing? Thanks in advance!
There are two things that could be the cause:
You're giving a file name to http.Dir, which expects a directory.
Your handler is registered after calling ListenAndServe, which makes it useless (ListenAndServe blocks, so http.Handle is never reached)
You could fix it by changing your code like this:
indexLoc := os.Getenv(webRootEnv) // webRootEnv is a directory path
fs := http.FileServer(http.Dir(indexLoc))
http.Handle("/", fs) // register the handler before listening
log.Fatal(http.ListenAndServe(":8080", nil))
You should also handle the error returned from ListenAndServe, as log.Fatal does above, to prevent silent failures of your program
I'm interested in using Azure's DocumentDB, but I can't see how to sensibly develop against it, run unit tests / integration tests, or have our continuous integration server run against it.
As far as I can see, there's no way to run a local version of the DocumentDB server; you can only run against a provisioned instance of DocumentDB in Azure.
This means that:
each developer must develop against their own provisioned instance of DocumentDB
each time a developer runs integration tests, it's against (their own) remote DocumentDB
continuous integration: I have to assume there's a way to programmatically provision another DocumentDB instance for the build? Even then, the CI server is running against the remote DocumentDB
Any advice on how people are approaching this with docdb would be much appreciated.
You are correct that there is no version of DocumentDB that you can run on your own computers. So I write unit tests for all stored procedures (sprocs) using documentdb-mock (it runs client-side on Node.js). I do test-first design (TDD) with this client-side testing, which requires no connection to Azure, but it only tests sprocs.
I run a number of other tests on the live Azure platform. In addition to the client-side tests, I test sprocs live against a real DocumentDB collection. I also test all client-side SDK code (used only for reads, since I do all writes in sprocs) on the live system.
I used to have a single collection per developer for live testing, but the fact that each test couldn't guarantee the state of the database meant that some tests failed intermittently, so I switched to creating and deleting a database and collection for each test. It's slightly slower, but not as slow as you would expect. I use nodeunit, and below is my setup and teardown code. Some points about this code:
I preload all sprocs every time since I use sprocs for all writes. I only use the client-side SDK for reads. You could skip this if you don't use sprocs.
I am using the documentdb-utils WrappedClient because it provides some added functionality (429 retry, a better async API, etc.). It's a drop-in replacement for the standard library (although it does not yet support partitioned collections), but you don't need it for the example code below to work.
The delay in the teardown was added to fix some intermittent failures that occurred when the collection was removed while some operations were still pending.
Each test file looks like this:
path = require('path')
{DocumentClient} = require('documentdb')
async = require('async')
{WrappedClient, loadSprocs, getLinkArray, getLink} = require('documentdb-utils')
client = null
wrappedClient = null
collectionLinks = null
exports.underscoreTest =

  setUp: (setUpCallback) ->
    urlConnection = process.env.DOCUMENT_DB_URL
    masterKey = process.env.DOCUMENT_DB_KEY
    auth = {masterKey}
    client = new DocumentClient(urlConnection, auth)
    wrappedClient = new WrappedClient(client)
    client.deleteDatabase('dbs/dev-test-database', () ->
      client.createDatabase({id: 'dev-test-database'}, (err, response, headers) ->
        databaseLink = response._self
        client.createCollection(databaseLink, {id: '1'}, {offerType: 'S2'}, (err, response, headers) ->
          collectionLinks = getLinkArray(['dev-test-database'], [1])
          scriptsDirectory = path.join(__dirname, '..', 'sprocs')
          spec = {scriptsDirectory, client, collectionLinks}
          loadSprocs(spec, (err, result) ->
            sprocLink = getLink(collectionLinks[0], 'createVariedDocuments')
            console.log("sprocs loaded for test")
            setUpCallback(err, result)
          )
        )
      )
    )

  test1: (test) ->
    ...
    test.done()

  test2: (test) ->
    ...
    test.done()

  ...

  tearDown: (callback) ->
    f = () ->
      client.deleteDatabase('dbs/dev-test-database', () ->
        callback()
      )
    setTimeout(f, 500)
A local version of DocumentDB (the DocumentDB Emulator) is now available: https://learn.microsoft.com/en-us/azure/documentdb/documentdb-nosql-local-emulator
I'm building a chat application leveraging ejabberd as the server, with Riak as the backend NoSQL db (on AWS). I can get a single-node ejabberd and a Riak cluster working correctly separately, but I am somehow not able to get ejabberd to push chat data into the database.
As a first step, I want to store offline messages in Riak. I've written a simple ejabberd module (mod_offline_riak) attached to the offline_message_hook. It gets called successfully when an offline message is sent, but the moment the Riak connection is made (in riakc_pb_socket:start_link), I get an undef error in the ejabberd logs. The relevant code snippets are pasted below.
Furthermore, the default ejabberd installation (built from source, v15.04) does not contain the riak-erlang-client dependency, so I have even included it in the ejabberd rebar.config.script and re-done make / install, but to no avail.
start(_Host, _Opt) ->
    ?INFO_MSG("Starting module mod_offline_riak ...", []),
    ejabberd_hooks:add(offline_message_hook, _Host, ?MODULE, save_message, 0),
    ok.

save_message(From, To, Packet) ->
    ?INFO_MSG("Entered function save_message ...", []),
    create_riak_object(To, Packet).

create_riak_object(To, Packet) ->
    ?INFO_MSG("Entered function create_riak_object ...", []),
    {ok, Pid} = riakc_pb_socket:start_link("***IP of one of the Riak nodes***", 8087),
    PollToBeSaved = riakc_obj:new(?DATA_BUCKET, To, Packet),
    riakc_pb_socket:put(Pid, PollToBeSaved),
    ok.
The error in the ejabberd log is:
2015-12-28 16:06:02.166 [error] <0.503.0>#ejabberd_hooks:run1:335 {undef,
[{riakc_pb_socket,start_link,["***Riak IP configured in the module***",8087],
[]},{mod_offline_riak,create_riak_object,2,[{file,"mod_offline_riak.erl"},
{line,39}]},{mod_offline_riak,save_message,3,[{file,"mod_offline_riak.erl"},
{line,23}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},
{line,385}]},{ejabberd_hooks,run1,3,[{file,"src/ejabberd_hooks.erl"},{line,332}]},
{ejabberd_sm,route,3,[{file,"src/ejabberd_sm.erl"},{line,115}]},
{ejabberd_local,route,3,[{file,"src/ejabberd_local.erl"},{line,112}]},
{ejabberd_router,route,3,[{file,"src/ejabberd_router.erl"},{line,74}]}]}
I'm afraid I've been struggling with this for the last few days and am still finding my way around Erlang / Riak, so I'd appreciate any help here.
On a slight tangent, I plan to allow embedding media attachments in the chat messages too. I presume the recommendation would be to use Riak CS instead of Riak; I'll be leveraging S3 in the background.
Finally, is there any good ejabberd / Riak / Redis integration material that folks are aware of and that I can refer to? I understand there was recently a talk in London, but I'm based in NY, so I missed that... :-(
Thanks again for all your help...
undef means the module/function is not available. Presumably, you have not built the riakc_pb_socket module, or its beam file is not on your Erlang code path.
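You can verify this from an Erlang shell attached to the ejabberd node; code:which/1 returns non_existing when the beam file cannot be found (the path below is just a placeholder):

%% check whether the riak client is on the code path
code:which(riakc_pb_socket).
%% non_existing means it is not; if you have built riak-erlang-client,
%% add its ebin directory to the path (placeholder path):
code:add_patha("/path/to/riak-erlang-client/ebin").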
I'm trying to figure out how to shut down an instance of Express. Basically, I want the inverse of the .listen(port) call - how do I get an Express server to STOP listening, release the port, and shutdown cleanly?
I know this seems like it might be a strange query, so here's the context; maybe there's another way to approach this and I'm thinking about it the wrong way. I'm trying to setup a testing framework for my socket.io/nodejs app. It's a single-page app, so in my testing scripts (I'm using Mocha, but that doesn't really matter) I want to be able to start up the server, run tests against it, and then shut the server down. I can get around this by assuming that either the server is turned on before the test starts or by having one of the tests start the server and having every subsequent test assume it's up, but that's really messy. I would much prefer to have each test file start a server instance with the appropriate settings and then shut that instance down when the tests are over. That means there's no weird dependencies to running the test and everything is clean. It also means I can do startup/shutdown testing.
So, any advice about how to do this? I've thought about manually triggering exceptions to bring it down, but that seems messy. I've dug through Express docs and source, but can't seem to find any method that will shut down the server. There might also be something in socket.io for this, but since the socket server is just attached to the Express server, I think this needs to happen at the express layer.
Things have changed because the Express app no longer inherits from the Node HTTP server. Fortunately, app.listen returns the underlying server instance.
var server = app.listen(3000);

// somewhere in an event handler, when you want to stop listening:
var handler = function() {
  server.close();
};
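For the Mocha-based testing setup described in the question, that might look like this (a minimal sketch; the port and app setup are placeholders):

var express = require('express');
var app = express(); // or however you build your app
var server;

before(function(done) {
  server = app.listen(3000, done); // start listening before the tests run
});

after(function(done) {
  server.close(done); // stop listening and release the port once the tests finish
});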
Use app.close(). Full example:
var app = require('express').createServer();

app.get('/', function(req, res){
  res.send('hello world');
});

app.get('/quit', function(req, res) {
  res.send('closing..');
  app.close();
});

app.listen(3000);
Call app.close() inside the callback when the tests have ended. But remember that the process is still running (though it is no longer listening).
If after this, you need to end the process, then call process.exit(0).
Links:
app.close / server.close: http://nodejs.org/docs/latest/api/http.html#server.close
process.exit: http://nodejs.org/docs/latest/api/process.html#process.exit
// ... some stuff
var server = app.listen(3000);
// later, e.g. once the tests are done:
server.close();
I have answered a variation of "how to terminate an HTTP server" many times on different Node.js support channels. Unfortunately, I couldn't recommend any of the existing libraries because they are lacking in one way or another. I have since put together a package that (I believe) handles all the cases expected of graceful HTTP server termination.
https://github.com/gajus/http-terminator
The main benefits of http-terminator are that:
it does not monkey-patch Node.js API
it immediately destroys all sockets without an attached HTTP request
it allows graceful timeout to sockets with ongoing HTTP requests
it properly handles HTTPS connections
it informs connections using keep-alive that the server is shutting down by setting a Connection: close header
it does not terminate the Node.js process
Usage with Express.js:
import express from 'express';
import {
createHttpTerminator,
} from 'http-terminator';
const app = express();
const server = app.listen();
const httpTerminator = createHttpTerminator({
server,
});
await httpTerminator.terminate();
More recent versions of Express support this solution:
const server = app.listen(port);

const shutdown = () => {
  server.close(); // stop accepting new connections; call shutdown() from wherever you trigger the stop (signal handler, test teardown, etc.)
};
You can easily do this by writing a bash script to start the server, run the tests, and stop the server. This has the advantage of letting you alias that script to run all your tests quickly and easily.
I use such scripts for my entire continuous deployment process. You should look at Jon Rohan's Dead Simple Git Workflow for some insight on this.
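A minimal sketch of such a start-test-stop script (the file names and the test command are placeholders):

#!/usr/bin/env bash
# Start the server, run the tests against it, then stop the server.
node server.js &            # start the app in the background
SERVER_PID=$!
sleep 1                     # crude wait for the server to start listening
npm test                    # run the test suite against the running server
STATUS=$?
kill "$SERVER_PID"          # shut the server down
exit "$STATUS"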