How to make Yesod listen on a specific IP? - yesod

I wrote a Yesod web application called myapp and compiled it (using stack build) to myapp-exe. Now I want to deploy it, but I do not want to use Keter or the scaffolding. The warp function lets me specify a port to listen on, but not the IP. Is there any way to make the compiled executable myapp-exe listen only on 127.0.0.1?

Warp provides a function called setHost:
https://s3.amazonaws.com/haddock.stackage.org/lts-5.18/warp-3.2.2/Network-Wai-Handler-Warp.html#v:setHost
You haven't shown how you're running your app, but I'm guessing you need to switch from run to runSettings.

Well, I should have researched some more before answering. The answer is to use runSettings from Network.Wai.Handler.Warp instead of the simple warp helper:
-- needs: import Network.Wai.Handler.Warp (defaultSettings, runSettings, setHost, setPort)
-- setHost takes a HostPreference; with OverloadedStrings, "127.0.0.1" works directly
let stts = setPort 12345 $ setHost "127.0.0.1" defaultSettings
runStderrLoggingT $ withSqlitePool dbfile 10 $ \pool -> liftIO $ do
    waiApp <- toWaiApp $ MyApp pool
    runSettings stts waiApp
This allows more settings, while warp was just a convenience wrapper.
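For reference, a minimal, self-contained sketch of the same idea without the Persistent pool (the foundation type and route below are made up for illustration):

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes       #-}
{-# LANGUAGE TemplateHaskell   #-}
{-# LANGUAGE TypeFamilies      #-}
import Network.Wai.Handler.Warp (defaultSettings, runSettings, setHost, setPort)
import Yesod

data MyApp = MyApp

mkYesod "MyApp" [parseRoutes|
/ HomeR GET
|]

instance Yesod MyApp

getHomeR :: Handler Html
getHomeR = defaultLayout [whamlet|Only reachable via 127.0.0.1|]

main :: IO ()
main = do
    waiApp <- toWaiApp MyApp
    -- bind only to the loopback interface on port 12345
    let settings = setPort 12345 $ setHost "127.0.0.1" defaultSettings
    runSettings settings waiApp

The HostPreference accepted by setHost also understands special values such as "*" (all interfaces) and "!4" (IPv4 only), should you need to widen the binding later.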

Related

Next.js CLI - is it possible to pre-build certain routes when running dev locally?

I'm part of an org with an enterprise app built on Next.js, and as it's grown the local dev experience has been degrading. The main issue is that our pages make several calls to /api routes on load, and those are built lazily when you run yarn dev, so you're always forced to sit and wait in the browser while that happens.
I've been thinking it might be better if we were able to actually pre-build all of the /api routes right away when yarn dev is run, so we'd get a better experience when the browser is opened. I've looked at the CLI docs but it seems the only options for dev are -p (port) and -H (host). I also don't think running yarn build first will work as I assume the build output is quite different between the build and dev commands.
Does anyone know if this is possible? Any help is appreciated, thank you!
I don't believe there's a way to prebuild them, but you can tell Next how long to keep them before discarding and rebuilding. Check out the onDemandEntries docs. We had a similar issue and solved it for a big project about a year ago with this in our next.config.js:
const { PHASE_DEVELOPMENT_SERVER } = require("next/constants")

module.exports = (phase, {}) => {
  let devOnDemandEntries = {}
  if (phase === PHASE_DEVELOPMENT_SERVER) {
    devOnDemandEntries = {
      // period (in ms) where the server will keep pages in the buffer
      maxInactiveAge: 300 * 1000,
      // number of pages that should be kept simultaneously without being disposed
      pagesBufferLength: 5,
    }
  }
  return {
    onDemandEntries: devOnDemandEntries,
    ...
  }
}

How to allow a service to listen and accept sockets on a port using a custom SELinux policy?

I'm implementing a custom SELinux policy (targeted) for a more or less typical systemd-managed service (Prometheus node_exporter). I used "ausearch" to collect all the permissions the service needs to function.
But when I call "curl localhost:9100/metrics", the response is "connection reset by peer". It turned out that this is caused by a missing permission. To get it working I had to add the following rule in addition to those from "ausearch":
allow node_exporter_t self:tcp_socket create_stream_socket_perms;
The whole .te file:
policy_module(node_exporter, 1.0.0)
type node_exporter_t;
type node_exporter_exec_t;
init_daemon_domain(node_exporter_t, node_exporter_exec_t)
type node_exporter_unit_t;
systemd_unit_file(node_exporter_unit_t)
allow node_exporter_t self:tcp_socket { accept bind create getattr listen setopt };
### Not working without this rule:
allow node_exporter_t self:tcp_socket create_stream_socket_perms;
###
corenet_tcp_bind_generic_node(node_exporter_t)
corenet_tcp_bind_hplip_port(node_exporter_t)
dev_list_sysfs(node_exporter_t)
dev_read_sysfs(node_exporter_t)
fs_getattr_tmpfs(node_exporter_t)
fs_getattr_xattr_fs(node_exporter_t)
init_read_state(node_exporter_t)
kernel_read_fs_sysctls(node_exporter_t)
kernel_read_net_sysctls(node_exporter_t)
kernel_read_network_state(node_exporter_t)
kernel_read_rpc_sysctls(node_exporter_t)
kernel_read_software_raid_state(node_exporter_t)
kernel_read_system_state(node_exporter_t)
kernel_search_network_sysctl(node_exporter_t)
I haven't found an explanation for why I need this additional rule yet. Does someone know what's going on here?
Actually, I would have assumed that these particular rules should be enough:
corenet_tcp_bind_generic_node(node_exporter_t)
corenet_tcp_bind_hplip_port(node_exporter_t)   # <- port 9100 is labeled as hplip_port_t
allow node_exporter_t self:tcp_socket { accept bind create getattr listen setopt };
OK, it took me a long time to come across the "dontaudit" feature of SELinux.
To capture all the necessary permissions, you can follow this procedure:
# Disable "dontaudit"
semodule -DB
#
# Do your stuff that creates permission violations
#
# Grab needed rules
ausearch -m avc --raw | audit2allow -R
#enable "dontaudit" back again
semodule -B
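As for why the extra rule was needed at all: create_stream_socket_perms is a permission-set macro from the reference policy. Roughly (paraphrased from support/obj_perm_sets.spt, so double-check against your policy version) it expands along these lines:

# approximate expansion; verify in support/obj_perm_sets.spt of your refpolicy
define(`rw_socket_perms', `{ ioctl read getattr write setattr append bind connect getopt setopt shutdown }')
define(`create_socket_perms', `{ create rw_socket_perms }')
define(`create_stream_socket_perms', `{ create_socket_perms listen accept }')

Compared with the hand-written allow rule, that additionally grants read and write (among others) on the socket; those denials were presumably covered by dontaudit rules, which is why ausearch never showed them until dontaudit was disabled.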

Start, Stop, Enable, Disable a Systemd Service from C++ [duplicate]

I have a .service file for a process that I don't want to start at boot time; instead I want to start it somehow from another, already running application at a given time.
The other option would be to put a D-Bus service file (I'm using GLib's D-Bus support in my apps) in /usr/share/dbus-1/services and somehow call it from my application. I haven't managed to do this either.
Let's say that my dbus service file from /usr/share/dbus-1/services is com.callThis.service
and my main service file from /lib/systemd/system is com.startThis.service
If I run a simple introspect from the command line:
/home/root # dbus-send --session --type=method_call --print-reply \
--dest=com.callThis /com/callThis org.freedesktop.DBus.Introspectable.Introspect
the D-Bus service file will get activated and it will start what is in its Exec line (com.startThis). The problem is that I want to achieve this from C/C++ code using GLib's D-Bus API.
A combination of g_dbus_connection_send_message with g_dbus_message_new_method_call or g_dbus_message_new_signal should be what you are looking for.
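For example, a rough C sketch along those lines (using the synchronous-reply variant; the bus name and object path are taken from the dbus-send call above, and error handling is kept minimal):

/* build: gcc call_service.c $(pkg-config --cflags --libs gio-2.0) */
#include <gio/gio.h>

int main(void)
{
    GError *error = NULL;
    GDBusConnection *conn = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
    if (conn == NULL) {
        g_printerr("bus: %s\n", error->message);
        return 1;
    }

    /* Any method call addressed to the well-known name makes the bus
       activate the matching .service file, just like dbus-send does. */
    GDBusMessage *msg = g_dbus_message_new_method_call(
        "com.callThis",                        /* destination (bus name)  */
        "/com/callThis",                       /* object path             */
        "org.freedesktop.DBus.Introspectable", /* interface               */
        "Introspect");                         /* method                  */

    GDBusMessage *reply = g_dbus_connection_send_message_with_reply_sync(
        conn, msg, G_DBUS_SEND_MESSAGE_FLAGS_NONE, -1 /* default timeout */,
        NULL, NULL, &error);
    if (reply == NULL)
        g_printerr("call failed: %s\n", error->message);

    g_clear_object(&reply);
    g_object_unref(msg);
    g_object_unref(conn);
    return 0;
}

If you want to start the systemd unit directly instead, the same pattern can be pointed at org.freedesktop.systemd1 and its Manager.StartUnit method, though that runs on the system bus and typically needs polkit authorization.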
I had trouble doing the same thing. Discovering G_BUS_NAME_WATCHER_FLAGS_AUTO_START solved it:
g_bus_watch_name(G_BUS_TYPE_SYSTEM, "com.mydbus.listen",
                 G_BUS_NAME_WATCHER_FLAGS_AUTO_START,
                 xOnNameAppeared, xOnNameVanished,
                 this, nullptr);

Swashbuckle (non-Core) configuration modification at runtime

Using the non-Core version of Swashbuckle (https://github.com/domaindrivendev/Swashbuckle), is there a way to modify the configuration after the application has launched? I cannot find a way to do this out of the box.
As an example, let's say I want to modify this at some point while the application is running:
.EnableSwaggerUi(c =>
{
    c.SupportedSubmitMethods("GET");
});
Is this possible without modifying Swashbuckle itself?
Look into IDocumentFilter; document filters get executed at runtime.
I have a few examples here:
SwashbuckleTest/blob/master/Swagger_Test/App_Start/SwaggerConfig.cs
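For instance, a document filter is just a class implementing IDocumentFilter and registered in the config; the filter below is made up purely to show the shape (it hides DELETE operations) and is not taken from the linked repo:

using System.Web.Http.Description;
using Swashbuckle.Swagger;

// Hypothetical filter: hides DELETE operations from the generated document.
// Because document filters run each time the swagger document is requested,
// they can consult runtime state (config flags, feature toggles, etc.).
public class HideDeleteOperationsFilter : IDocumentFilter
{
    public void Apply(SwaggerDocument swaggerDoc, SchemaRegistry schemaRegistry, IApiExplorer apiExplorer)
    {
        foreach (var path in swaggerDoc.paths.Values)
        {
            path.delete = null;
        }
    }
}

// Registered in SwaggerConfig.cs:
//   .EnableSwagger(c => c.DocumentFilter<HideDeleteOperationsFilter>())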
That EnableSwaggerUi(c => SupportedSubmitMethods(...)) setting is something that takes effect on the browser client side; you can change that behavior with a custom JS file (look in the config for InjectJavaScript).
You can also overwrite the default assets used by the swagger-ui (such as index.html) with your own version; look in the config for CustomAsset.

Determining dev vs production

What method should I use to determine if I'm on the dev system vs. production?
In this post from Ray Camden, he shows how to see what folder you're in, so that could be an indicator.
While in dev, I want to have error trapping turned off, missing-template handling turned off, debug="yes" for cfstoredproc and cfquery, and to always reload the components in onRequestStart.
I have two approaches to this, both of which have served me well. I'll start with the easiest approach first, which is what I'd call the "static" approach. I use this when I don't have many environment-specific settings... maybe a small handful.
I'm assuming you have an Application.cfc or .cfm file for your app. In there, you could set a variable, something like "application.environment", and by default it'd be set to "dev". Throughout your app you could inspect that variable to determine where you are.
When you package your application for deployment, you could then change that Application.cfc file to read "prod" instead.
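A rough sketch of what that might look like in Application.cfc (the variable name follows the text above; everything else is illustrative):

<!--- Application.cfc (sketch) --->
<cfcomponent>
    <cfset this.name = "MyApp">

    <cffunction name="onApplicationStart" returntype="boolean">
        <!--- defaults to dev; the build swaps this value to prod before deployment --->
        <cfset application.environment = "dev">
        <cfreturn true>
    </cffunction>
</cfcomponent>

Elsewhere in the app you would branch on it with something like <cfif application.environment EQ "dev"> ... </cfif>.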
Now, that's going to get annoying, so I use Ant for this. I have something like this in my build.xml, which lives in the same directory as Application.cfc:
<replace file="Application.cfc" token="DEV" value="PROD" casesensitive="true" />
And then zip the app for deployment:
<zip destfile="${zipdir}/MyApp-Production.zip">
    <zipfileset dir="." prefix="MyApp" />
</zip>
Then I deploy the zip. If I'm working on a small project that uses FTP instead of some corporate enterprisey deployment hooey, then I'll just have an ANT task that FTPs files to my production server and it'll also perform that replace on Application.cfc and push that file, too.
For most of the apps I work on where I work, we use two database tables to manage environments. We do this because we have a lot of different environments, and each one has different settings, usually centered around filesystem and network paths that differ per environment (let's not talk about why they're different... totally separate discussion). So we have a table we call "AppLocations":
LocationID | LocName | LocDesc | Setting1 | Setting2 | Setting3 | ......
1 | Local | 'Localhost Environment' | whatever.....
2 | Dev | 'Development Environment' | whatever....
3 | Test | 'Test Environment' | whatever.....
and so on.
Then, we have another table named "AppLocationHosts"
LocationID | LocHostName
1 | 'localhost'
2 | 'devservername'
2 | 'otherdevservername'
3 | 'testservername'
3 | 'othertestserver'
and so on.
Then, in Application.cfc, in onApplicationStart, we run this query:
SELECT TOP 1 *
FROM AppLocations
WHERE LocationID IN (SELECT LocationID FROM AppLocationHosts WHERE LocHostName = <cfqueryparam value="#CGI.HTTP_HOST#" cfsqltype="cf_sql_varchar"/>)
And from there, once we know what location we're in based on the http_host match, we set those "Setting" columns into the application scope:
<cfloop list="#qryAppPathLocations.ColumnList#" index="ColName">
    <cfset application[ColName] = qryAppPathLocations[ColName]>
</cfloop>
This approach isn't for everyone, but in our weird environment where consistency is unusual, it's been a very flexible approach.
Now, if you literally only have two environments, and one of them is "localhost" and the other is "www.myapp.com", then by far the easiest would be to just check http_host in onApplicationStart: if you're on "www.myapp.com", you do your production-specific setup. Perhaps in dev you set something like "request.querydebug = true", and in production you turn it off; then your queries could use that flag to decide whether to turn debug on for cfstoredproc and cfquery. Though I must say, I strongly recommend against that.
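In Application.cfc, that simplest two-environment check might look roughly like this (the host name is only illustrative):

<cffunction name="onApplicationStart" returntype="boolean">
    <!--- anything other than the production host is treated as dev --->
    <cfif cgi.http_host EQ "www.myapp.com">
        <cfset application.environment = "prod">
    <cfelse>
        <cfset application.environment = "dev">
    </cfif>
    <cfreturn true>
</cffunction>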
Can you just enable debugging in CFAdmin on your Dev box for your IP then use IsDebugMode()?
Dump the #server# scope and you'll see some keys that may help - eg the license mode of ColdFusion.
The solution we use is to set the IP of the current instance, and check it against our known "dev" IPs. Simple, easy, works.
A lot of good answers here - I'd like to mention using cgi.server_name, which can be combined with custom DNS entries to specify your dev environment. To get localhost working with IIS on Windows, set up the hosts file, e.g. like this:
C:\Windows\System32\drivers\etc\hosts - add entry:
127.0.0.1 myapp.dev.mydomain.com.au
Then, in IIS, map your site to this host name.
Your systest and uat servers might be set up properly in your corp's DNS, such as
myapp.systest.mydomain.com.au - systest
myapp.uat.mydomain.com.au - uat
myapp.mydomain.com.au - production
Then, in my application.cfc I have a getEnvironment() that is called on every load for ease of use:
// get the environment based on cgi variables - top of Application.cfc
this.stConfig = THIS.getEnvironment();

// ... onApplicationStart
if (!stConfig.validEnvironment) {
    writeOutput("Environment #cgi.server_name# not recognised");
    return false;
}
// ...
public struct function getEnvironment() {
    var stConfig = structNew();
    stConfig.validEnvironment = 1;
    switch (cgi.server_name) {
        // my dev environment
        case "myapp.dev.mydomain.com.au": {
            stConfig.env = "dev";
            // +++
            break;
        }
        // my systest environment
        case "myapp.systest.mydomain.com.au": {
            stConfig.env = "systest";
            // +++
            break;
        }
        // etc
        default: {
            stConfig.validEnvironment = 0;
        }
    }
    return stConfig;
}
I will also copy stConfig to the request scope.
Now, I've got a lot of other stuff there too, and there are lots of ways to implement the storage of environments, but basically I find the combination of DNS and cgi.server_name particularly well suited to managing environments.
FWIW, I also include INI files, keyed on the environment name, that I use for storing environment-specific configuration, and I read them in Application.cfc. I find getProfileSections() very useful for this, as the config files are very easy to work with. I have one common file that is shared between all environments, and then environment-specific ones for the settings that need to be tailored to each environment.
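A rough sketch of that INI approach, assuming stConfig.env comes from the getEnvironment() function above (the file layout and the "settings" section name are made up here):

<cfscript>
// e.g. config/common.ini plus config/dev.ini, config/systest.ini, config/prod.ini
iniPath  = expandPath("config/" & stConfig.env & ".ini");
sections = getProfileSections(iniPath);   // struct: section name -> comma list of keys
for (key in listToArray(sections["settings"])) {
    application[key] = getProfileString(iniPath, "settings", key);
}
</cfscript>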
Is it possible to get the directory of the currently running application?
Consider this directory structure for the different "instances" of your application:
/home/deploy/DevLevel.0/MyApp - Production Version
/home/deploy/DevLevel.1/MyApp - Preview or Staging Version
/home/deploy/DevLevel.2/MyApp - Development Version
If you can read the path to the current application, it's easy to find the integer after DevLevel. With that in hand (set as a global variable/constant), use it to change settings or behavior at runtime:
DevLevel == 0 means "Production"
DevLevel >= 1 means "Development"
For example, in the credit card authorization code:
if (DevLevel > 0)
    enable_test_mode();
In error handling code:
if (DevLevel == 0)
    send_error_to_log();
else
    print_error();
Conclusion
The primary benefit here is that the code between the versions can remain 100% identical. No more "forgetting to enable this or disable that when moving code live".
Can this be implemented in ColdFusion?
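It can. One possible CFML sketch, assuming the DevLevel.N directory convention above (variable names are illustrative):

<cfscript>
// Derive the DevLevel from the path of the currently executing template
appPath  = getDirectoryFromPath(getCurrentTemplatePath());  // e.g. /home/deploy/DevLevel.1/MyApp/
devLevel = 0;
if (reFind("DevLevel\.[0-9]+", appPath)) {
    devLevel = val(reReplace(appPath, ".*DevLevel\.([0-9]+).*", "\1"));
}
isProduction = (devLevel EQ 0);
</cfscript>

Running this from Application.cfc gives a path under the deployed instance directory, so the integer after "DevLevel." identifies the environment without any per-environment code changes.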