Multiple ELMAH Databases - elmah

We use ELMAH for logging, and what we would ideally like to do is customize our ELMAH connection strings so that each environment (dev, uat, qa, production) goes to a separate ELMAH log. (There are compelling reasons to do this which have to do with our business process and the nature of our applications.)
I've been unable to determine, by scouring the documentation and googling madly, whether or not the database used by ELMAH must be named "ELMAH", or if I can customize it to be "dev", "qa", "uat", and "prod" (appropriate to the environment we're deploying to).
Can someone please clarify this for me?

When you write "database", you don't really say which database engine you are using.
That being said, ELMAH's logging to a database is based on connection strings in web.config. This means you can have an individual connection string for each environment and "swap" these in using web.config transformations.
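As a minimal sketch, assuming SQL Server and placeholder server/database names (ELMAH itself does not care what the database is called; it only needs the connection string name referenced from its <errorLog> element, and the xdt namespace is declared on the root <configuration> element as usual):

<!-- Web.config: base connection string used in dev -->
<connectionStrings>
  <add name="elmah-sqlserver"
       connectionString="Data Source=.;Initial Catalog=ErrorsDev;Integrated Security=True" />
</connectionStrings>
<elmah>
  <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="elmah-sqlserver" />
</elmah>

<!-- Web.Prod.config transform: swap in the production database at deploy time -->
<connectionStrings>
  <add name="elmah-sqlserver"
       connectionString="Data Source=prod-sql;Initial Catalog=ErrorsProd;Integrated Security=True"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings>

So the databases can be named dev, qa, uat and prod if you like; the tables and stored procedures created by ELMAH's SQL script just need to exist in whichever database the connection string points to.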

Related

Adding users via Flyway DB migration

I've read in some articles that it's best practice NOT to add DB users via a Flyway DB migration. It's not very clear to me why that's considered bad practice. One thing we thought about is that it might be good to have the user configuration automatically documented in the code.
One article mentioned that you might want different user configurations for different environments, but you could also control that in Flyway.
When/why would you not want to add DB users using a Flyway DB migration?
If I'm deploying a new database user that will be common across all environments, I would absolutely make the creation of that user part of the Flyway deployment scripts. It fundamentally makes sense: "Version 43.43 is where we added the login snarglegrass to the app."
On the other hand, if you are setting up different environments with varying permissions, I would probably make that part of the flow-control commands in pre/post-deployment scripts instead of using Flyway. The reason is that it can be challenging to write such scripts so that they're repeatable and safe. You could still do it that way, though.
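Purely as a sketch of the "repeatable and safe" concern, assuming SQL Server syntax and reusing the snarglegrass example from above (the version number and the placeholder password are made up), a guarded Flyway migration could look like this:

-- V43_43__add_snarglegrass_login.sql
-- Guard clauses keep the script safe if the principal already exists;
-- ${snarglegrass_password} is a Flyway placeholder supplied per environment.
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = 'snarglegrass')
BEGIN
    CREATE LOGIN snarglegrass WITH PASSWORD = '${snarglegrass_password}';
END;

IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = 'snarglegrass')
BEGIN
    CREATE USER snarglegrass FOR LOGIN snarglegrass;
END;

Flyway placeholders cover the simple per-environment differences; the harder part, as noted above, is keeping scripts like this genuinely idempotent once grants and permissions start to differ between environments.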

Manage SQLite database with git

I have this small project that specifies SQLite as the database choice.
For this particular project, the framework is Django and the server is hosted on Heroku. In order for the database to work, it must be set up with migration commands and credentials whenever the project is deployed to continuous integration tools or a development site.
The question is that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository, which we version control with git. How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario? Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted, which makes the situation tricky.
that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository
If your deployment platform does not support your chosen database, then your development environment should probably be moved to one of the databases it does support. It is possible to run different databases in development and production, but that just seems like a source of headaches.
I have found a number of articles stating that Heroku simply doesn't support SQLite in production and recommends Postgres instead.
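For the Django side, a common approach (a sketch, assuming the third-party dj-database-url package, Heroku's DATABASE_URL config var, and the usual BASE_DIR from a generated settings file) is to keep SQLite for local development and let the environment pick the production database:

# settings.py
import os
import dj_database_url  # third-party package: dj-database-url

# Use DATABASE_URL when it is set (Heroku's Postgres add-on provides it),
# otherwise fall back to the local SQLite file for development.
DATABASES = {
    'default': dj_database_url.config(
        default='sqlite:///' + os.path.join(BASE_DIR, 'my_project.sqlite3')
    )
}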
How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario?
I assume that you are just extracting data from one database to give to another, so yes, as long as that script is a one-time batch operation each time the code is updated, it should be fine. You will want something else if you are adding or manipulating data in production and then exporting it back to your git repository.
Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted
An environment variable should solve that. You set your host machine to have environment variables with your credentials and then just retrieve them within the script. You are looking to have something like this:
import os

# In a real deployment these are set on the host (a shell export, Heroku config
# vars, CI secrets), not hard-coded; they are assigned here only for illustration.
os.environ['USER'] = 'username'
os.environ['PASSWORD'] = 'password'

# Retrieve the environment vars inside the script
USER = os.getenv('USER')
PASSWORD = os.environ.get('PASSWORD')

Deploying a web job with the appropriate environment variables

We are trying to deploy a web job via Octopus. We have different Event Hub keys saved in the variables, and we expect the web job to pick up the right key depending on the environment it is being deployed to. Has anyone done this before? Any advice on setting up configurations in Octopus?
<========== UPDATE ===========>
We were being careless and hadn't set our Octopus process to transform the configuration variables. You should be able to do so by clicking 'configure variables' in the process step.
I don't think the fact that it is deployed via Octopus is all that relevant here. Generally, a .NET WebJob is able to access Azure App Settings using the standard configuration API.
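For example, a minimal sketch assuming a classic .NET Framework WebJob with a reference to System.Configuration (the setting name EventHubKey is just a placeholder):

// Reads the value configured under the Web App's Application Settings,
// which Octopus can substitute per environment at deploy time.
using System.Configuration;

public static class EventHubConfig
{
    public static string Key =>
        ConfigurationManager.AppSettings["EventHubKey"];
}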
If that is not working for you, please update your question to clarify what you tried and specifically what didn't work.

Wagtail CMS content deployment to production

I am studying the popular Django CMS framework Wagtail and have come to a question: how do you deploy your developed content, like pages/documents/images, to production environments?
I am puzzled because this content (like pages) is saved in the database; essentially it is just rows in database tables, not resources in the git repo. So if I develop a simple website in my dev environment, when I come to deploy to prod it's not as simple as a git push. What is the best practice for this?
I read some code from Torchbox; there are some database-dump and record-pulling tasks using Fabric, but I'm not sure if that's the preferred way, nor can I fully understand them.
Or, if it's a production site, is it assumed that everyone adds content there and prod is the source of truth, so there is no need for "content deployment" at all, only schema changes via South migrations and other static resources?
Please provide guidance if anyone has experience with this.
Thanks
On our (Torchbox) sites, all content entry usually happens on the production site, so we don't need to push any database content as part of our regular deployments. Many of our sites have tens or even hundreds of editors, so it would be almost impossible to synchronise the content across multiple installations of the site.
Whenever we need to transfer content from one installation to another (for example, deploying the production site for the first time, or pulling a snapshot of the live site to help with development), we use the PostgreSQL pg_dump command to make a SQL dump of the complete database, then restore it at the destination using the psql command. Tools like Fabric can be used to automate this, but this isn't essential.
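If you do want to script it, a minimal sketch (plain Python subprocess calls rather than Fabric, with placeholder database names; connection credentials are better supplied via ~/.pgpass or environment variables than embedded in the script):

import subprocess

SOURCE_DB = 'myproject_production'   # placeholder names
DEST_DB = 'myproject_staging'
DUMP_FILE = 'snapshot.sql'

# 1. Make a plain SQL dump of the complete source database.
subprocess.run(['pg_dump', '--no-owner', '-f', DUMP_FILE, SOURCE_DB], check=True)

# 2. Restore it at the destination using psql.
subprocess.run(['psql', '-d', DEST_DB, '-f', DUMP_FILE], check=True)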

Monitoring the Beanstalk production environment for errors (code level)

We have our production site in Elastic Beanstalk. SNS notifications are a good feature for keeping us updated about the environment status whenever it changes, but we want to watch the production environment logs more closely.
Our project is a Java web application. We want to check the status of the production environment from our other Beanstalk environments, i.e., the beta and staging environments, which are in the same region and within the same application.
Our goals are to:
1. Use the AWS SDK or other AWS tools to get the production Beanstalk Tomcat logs and display them on a page in our beta site.
2. Run some tool periodically from the beta environment against the live environment which basically tests the site, i.e., whether all code-level mappings are good, and emails any exceptions.
To break point 2 down further:
We have a Quartz scheduler to schedule a job at a particular time. We are planning to add a script which tests the complete environment periodically. Are there any Beanstalk built-in tools that test the complete site, accessing all URLs and testing the DB-to-Java-object mappings (Hibernate mappings), etc.?
We do use the Elastic Beanstalk S3 bucket to check Tomcat logs, but would like to implement steps 1 & 2 if possible.
--
Thanks
For Item #1:
I don't recommend using beta and dev to watch production. Instead, here's what I'd do:
Set up Pingdom on all three environments, so you can keep a close eye on uptime
Review the logging code. Do you have an explicit pattern/idiom for exception handling in place? Is your logging functioning?
Set up Papertrail with Logback. Why? You'll have real-time aggregate log tailing for each and every machine you set up a syslog receiver for. For beanstalk-maven-plugin, we are about to release an archetype (see an example 'blank' project created from it). Even if you're not using it, it's worth a look to see how it's used.
Set up log rollout to S3. As it is, the usage is quite limited; I suggest you work it into something you can import for analysis (or, better yet, export for use from Hive, which is something Papertrail does)
Define your health check code accordingly. Think about what could go wrong in terms of dependencies
Look at / set up some CloudWatch metrics. If your application is heavy and you're on a t1.micro, under which conditions would it spike? Use that to your advantage (see the sketch after this list)
Those are just a few ideas.
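As a rough illustration of the CloudWatch suggestion above (a sketch in Python with boto3, using a placeholder region and instance ID; the equivalent calls exist in the AWS SDK for Java):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')  # placeholder region

# Average CPU over the last hour for one production instance (placeholder ID).
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])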
w/r/t Item #2:
I suggest you rethink your structure. I actually dislike the idea of using crontab on Elastic Beanstalk servers, since it's error-prone (leader_only? managing output?). Instead, I use my new favourite crontab webapp, Jenkins, and set up an integration-testing / smoke-testing artifact with only the relevant bits needed to remotely test the instance. Selenium might help, but I guess if your services are critical, you might be happier relying on rest-assured, for instance.
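Purely to illustrate the shape of such a smoke-testing artifact (sketched here with Python's requests library and placeholder URLs; with rest-assured the equivalent checks would live in a JUnit test), the Jenkins job only needs something that exits non-zero on failure:

import sys
import requests

BASE_URL = 'https://live.example.com'          # placeholder
PATHS = ['/health', '/login', '/api/ping']     # placeholder endpoints

failures = []
for path in PATHS:
    try:
        response = requests.get(BASE_URL + path, timeout=10)
        if response.status_code != 200:
            failures.append('%s: HTTP %d' % (path, response.status_code))
    except requests.RequestException as exc:
        failures.append('%s: %s' % (path, exc))

if failures:
    print('\n'.join(failures))
    sys.exit(1)       # non-zero exit marks the Jenkins build as failed
print('Smoke test passed')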
Hope it helps.