Is NLog not archiving all files? (console application)

I'm logging to one file per day, and I want to move the old ones to a subfolder every month, but I have a feeling something is off here.
These are my current settings; for testing purposes, I changed them to archive every minute:
<target name="file" xsi:type="File"
layout="${longdate} ${logger} ${message}${exception:format=ToString}"
fileName="${basedir}/LOG/${date:format=yyyy-MM-dd}.log"
archiveFileName="${basedir}/LOG/OLD_LOGS/{#}.log"
archiveDateFormat="yyyy-MM-dd"
archiveNumbering="Date"
archiveEvery="Minute"
/>
Assume the current date for this test is May 29th, 2020.
If I run my app now, the file 2020-05-29.log will be created in the LOG folder.
.
├── ...
├── LOG
│   └── 2020-05-29.log
└── ...
If I run my app one minute later, the file above will be archived to the newly created subfolder OLD_LOGS and a new file will be created in the LOG folder.
.
├── ...
├── LOG
│   ├── 2020-05-29.log
│   └── OLD_LOGS
│       └── 2020-05-29.log
└── ...
If I change my clock to the next day (May 30th, 2020), a new file, 2020-05-30.log, will be created:
.
├── ...
├── LOG
│   ├── 2020-05-29.log
│   ├── 2020-05-30.log
│   └── OLD_LOGS
│       └── 2020-05-29.log
└── ...
If I run my app one minute later, the 2020-05-30.log file will be archived
.
├── ...
├── LOG
│   ├── 2020-05-29.log
│   ├── 2020-05-30.log
│   └── OLD_LOGS
│       ├── 2020-05-29.log
│       └── 2020-05-30.log
└── ...
Shouldn't the LOG/2020-05-29.log file have been archived?

There is a problem in your configuration for per-minute archiving.
NLog keeps overwriting the old archive file because your archive date format is yyyy-MM-dd, which produces the same archive file name for every minute within a day.
The correct configuration for per-minute archiving is below:
<target name="kFile" xsi:type="File"
layout="${longdate} ${logger} ${message}${exception:format=ToString}"
fileName="${basedir}/LOG/${date:format=yyyy-MM-dd}.log"
archiveFileName="${basedir}/LOG/OLD_LOGS/{#}.log"
archiveDateFormat="yyyy-MM-dd-mm"
archiveNumbering="Date"
archiveEvery="Minute"
/>
I changed the format to yyyy-MM-dd-mm. This change created the files below in my folder:
.
├── ...
├── LOG
│   ├── 2020-05-30.log
│   └── OLD_LOGS
│       ├── 2020-05-30-01.log
│       └── 2020-05-30-02.log
└── ...
I also tested with the same configuration as yours, and the result confirmed that the NLog archiving feature does not work the way you want it to: it only places the last file of the month in the archive folder, not all of them.
So you have to configure NLog to archive files every day and set the maximum number of archive files to something like 365 to keep them for one year.
Apply the configuration below:
<target name="kFile" xsi:type="File"
layout="${longdate} ${logger} ${message}${exception:format=ToString}"
fileName="${basedir}/LOG/${date:format=yyyy-MM-dd}.log"
archiveFileName="${basedir}/LOG/OLD_LOGS/{#}.log"
archiveDateFormat="yyyy-MM-dd"
archiveNumbering="Date"
archiveEvery="Day"
maxArchiveFiles="365"
/>
This way you will get one file per day, and each day's file will be moved to the archive folder. The archive will keep the files for one year. You may want to review the maximum archive file count, though, because it can increase the size of the folder depending on how much you log.

Related

How to put projects in one folder in a Visual Studio solution

I did some problem-solving in C++, and the current file structure looks like this.
solution_folder/
├── question_1/
│   ├── Main.cpp
│   ├── question_1.vcxproj
│   └── question_1.vcxproj.filters
├── question_2/...
├── (more project folders)
└── solution.sln
I want to put all projects in one folder and still be able to open the solution in Visual Studio just as I used to before. I don't want to scroll through hundreds of folders to get to the readme section when I upload the solution on GitHub.
solution_folder/
├── src/
│   ├── question_1/
│   │   ├── Main.cpp
│   │   ├── question_1.vcxproj
│   │   └── question_1.vcxproj.filters
│   ├── question_2/...
│   └── (more project folders)
└── solution.sln
Is this possible? Should I not create a project for each question?
As @drescherjm and @heapunderrun commented:
Put all the project folders into one folder in File Explorer.
Remove all the projects from the solution in Solution Explorer.
Right-click the solution and use Add → Existing Project for each project. (An alternative .sln hand-edit is sketched after the shortcuts below.)
Update:
Shortcuts:
Remove: Delete
Add existing project: Alt, F, D, E
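Alternatively, you can move the folders in File Explorer and then hand-edit the .sln file, since each Project entry stores a path relative to the solution. A minimal sketch assuming the src/ layout above (the first GUID is Visual Studio's C++ project-type ID; the second is the project's own GUID, a placeholder here):
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "question_1", "src\question_1\question_1.vcxproj", "{11111111-2222-3333-4444-555555555555}"
EndProject
Visual Studio will load the project from its new location the next time you open the solution.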

How to generate coverage for multiple packages using go test in custom folders?

We have the following project structure:
├── Makefile
├── ...
├── src
│   ├── app
│   │   ├── main.go
│   │   ├── models
│   │   │   ├── ...
│   │   │   └── dao.go
│   │   ├── ...
│   │   └── controllers
│   │       ├── ...
│   │       └── pingController.go
│   └── test
│       ├── all_test.go
│       ├── ...
│       └── controllers_test.go
└── vendor
    └── src
        ├── github.com
        ├── golang.org
        └── gopkg.in
I want to measure the coverage of the packages in src/app by the tests in src/test. Currently I generate the coverage profile by running a custom script that runs coverage for each package in app and then merges all the coverage profiles into one file. Recently I heard that since Go 1.10 we are able to generate coverage for multiple packages.
So I tried to replace that script with a one-liner:
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all ./src/test/...
It gives me "ok test 0.475s coverage: 0.0% of statements in all"
When I do
cd src/test/
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all
The logs show that the specs are run and the tests are successful, but I still get "coverage: 0.0% of statements in all" and an empty cover.out.
What am I missing to properly compute coverage of packages in app by tests in test?
You can't with the current state of go test, but you can always use third-party scripts:
https://github.com/grosser/go-testcov
Short answer:
go test -race -coverprofile=coverage.txt -covermode=atomic ./... # Run all the tests with the race detector enabled
go test -bench=. -benchmem ./... # Run all the benchmarks and print memory allocation statistics
In order to create a test for Go code, you have to create a file (in the same folder as the code under test) with the same name as the source file plus the suffix "_test". The package has to be the same too.
So, if I have a Go file called strings.go, the related test suite has to be named strings_test.go.
After that, you have to create a function that takes t *testing.T as input, and the name of the function has to start with the word Test or Benchmark.
So, if strings.go contains a method called IsUpper, the related test case is a function called TestIsUpper(t *testing.T).
If you need a benchmark, then you substitute the word Test with Benchmark, so the name of the function will be BenchmarkIsUpper, and the parameter it takes is b *testing.B.
You can have a look at the following repository to see the tree structure necessary to execute tests in Go: https://github.com/alessiosavi/GoGPUtils.
There you can find benchmarks and test cases.
Here is an example of the tree structure:
├── string
│   ├── stringutils.go
│   └── stringutils_test.go
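To make this concrete, here is a minimal sketch of such a pair of files; the IsUpper implementation is hypothetical and only illustrates the naming convention:
// stringutils.go
package stringutils

import "unicode"

// IsUpper reports whether every letter in s is upper case.
func IsUpper(s string) bool {
    for _, r := range s {
        if unicode.IsLetter(r) && !unicode.IsUpper(r) {
            return false
        }
    }
    return true
}

// stringutils_test.go (same package, same folder)
package stringutils

import "testing"

// Picked up by "go test" because of the Test prefix and *testing.T parameter.
func TestIsUpper(t *testing.T) {
    if !IsUpper("HELLO") {
        t.Error(`expected IsUpper("HELLO") to be true`)
    }
}

// Picked up by "go test -bench=." because of the Benchmark prefix.
func BenchmarkIsUpper(b *testing.B) {
    for i := 0; i < b.N; i++ {
        IsUpper("HELLO")
    }
}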

Terraform environments - how to keep them DRY

We are utilizing Terraform heavily for AWS Cloud provisioning. Our base terraform structure looks like this:
├── modules
│   ├── x
│   └── y
└── environments
    ├── dev
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    ├── uat
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    └── prod
        ├── main.tf
        ├── output.tf
        └── variables.tf
As we have reached a point where we have many modules and many environments, code duplication has become a serious headache, and we would like to get rid of as much of it as possible.
Our main concern currently is with the output.tf files - every time we extend an existing module or add a new module, we need to set up the environment specific configuration for it (this is expected), but we still have to copy/paste the required parts into output.tf to output the results of the provisioning (like IP addresses, AWS ARNs, etc.).
Is there a way to get rid of the duplicated output.tf files? Could we just define the wanted outputs in the modules themselves and see all defined outputs whenever we run terraform for a specific environment?
We built and open-sourced Terragrunt to solve this very issue. One of Terragrunt's features is the ability to download remote Terraform configurations. The idea is that you define the Terraform code for your infrastructure just once, in a single repo called, for example, modules:
└── modules
    ├── app
    │   └── main.tf
    ├── mysql
    │   └── main.tf
    └── vpc
        └── main.tf
This repo contains typical Terraform code, with one difference: anything in your code that should be different between environments should be exposed as an input variable. For example, the app module might expose the following variables:
variable "instance_count" {
description = "How many servers to run"
}
variable "instance_type" {
description = "What kind of servers to run (e.g. t2.large)"
}
In a separate repo, called, for example, live, you define the code for all of your environments, which now consists of just one .tfvars file per component (e.g. app/terraform.tfvars, mysql/terraform.tfvars, etc). This gives you the following file layout:
└── live
    ├── prod
    │   ├── app
    │   │   └── terraform.tfvars
    │   ├── mysql
    │   │   └── terraform.tfvars
    │   └── vpc
    │       └── terraform.tfvars
    ├── qa
    │   ├── app
    │   │   └── terraform.tfvars
    │   ├── mysql
    │   │   └── terraform.tfvars
    │   └── vpc
    │       └── terraform.tfvars
    └── stage
        ├── app
        │   └── terraform.tfvars
        ├── mysql
        │   └── terraform.tfvars
        └── vpc
            └── terraform.tfvars
Notice how there are no Terraform configurations (.tf files) in any of the folders. Instead, each .tfvars file specifies a terraform { ... } block that specifies from where to download the Terraform code, as well as the environment-specific values for the input variables in that Terraform code. For example, stage/app/terraform.tfvars may look like this:
terragrunt = {
  terraform {
    source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
  }
}

instance_count = 3
instance_type  = "t2.micro"
And prod/app/terraform.tfvars may look like this:
terragrunt = {
  terraform {
    source = "git::git@github.com:foo/modules.git//app?ref=v0.0.1"
  }
}

instance_count = 10
instance_type  = "m2.large"
See the Terragrunt documentation for more info.
One way to resolve this is to create a base environment, and then symlink the common elements, for example:
├── modules
│   ├── x
│   └── y
└── environments
    ├── base
    │   ├── output.tf
    │   └── variables.tf
    ├── dev
    │   ├── main.tf
    │   ├── output.tf -> ../base/output.tf
    │   └── variables.tf -> ../base/variables.tf
    ├── uat
    │   ├── main.tf
    │   ├── output.tf -> ../base/output.tf
    │   └── variables.tf -> ../base/variables.tf
    ├── super_custom
    │   ├── main.tf
    │   ├── output.tf    # not symlinked
    │   └── variables.tf # not symlinked
    └── prod
        ├── main.tf
        ├── output.tf -> ../base/output.tf
        └── variables.tf -> ../base/variables.tf
This approach only really works if your output.tf and variables.tf files are the same for each environment. Although you can have non-symlinked variants (e.g. super_custom above), this can become confusing, because it's not immediately obvious which environments are custom and which aren't. YMMV. I try to keep the changes between environments limited to a .tfvars file per environment.
It's worth reading Charity Majors's excellent post on tfstate files, which set me on this path.
If your dev, uat and prod environments have the same shape, but different properties you could leverage workspaces to separate your environment state, along with separate *.tfvars files to specify the different configurations.
This could look like:
├── modules
│   ├── x
│   └── y
├── dev.tfvars
├── prod.tfvars
├── uat.tfvars
├── main.tf
├── outputs.tf
└── variables.tf
You can create a new workspace with:
terraform workspace new uat
Then deploying changes becomes:
terraform workspace select uat
terraform apply --var-file=uat.tfvars
The workspaces feature ensures that the different environments' states are managed separately, which is a bonus.
This approach only works when the differences between the environments are small enough that it makes sense to encapsulate the logic for them in the individual modules, for example a high_availability flag that adds some redundant infrastructure for uat and prod, as sketched below.
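A minimal sketch of such a flag, assuming Terraform 0.12+ syntax (the variable and resource names here are hypothetical):
variable "high_availability" {
  description = "Provision redundant infrastructure (uat/prod)"
  type        = bool
  default     = false
}

resource "aws_instance" "app" {
  # Two instances when high availability is requested, otherwise one.
  count         = var.high_availability ? 2 : 1
  ami           = var.ami_id        # hypothetical variable
  instance_type = var.instance_type # hypothetical variable
}
Each environment's .tfvars file then only needs to set high_availability = true or false.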

How to organize test resource files when using waf

I am using Google Tests in a project with waf as a build system. I want to know an effective way of dealing with resource files.
For a directory structure like the following:
MyProject
├── build
├── src
│   ├── do_something.cpp
│   └── do_something.h
├── test
│   ├── unit_test.cpp
│   └── resources
│       ├── input1.txt
│       └── input2.txt
└── wscript
After building, to run the tests from the build directory, I would need the resource files to be copied across. The build directory would look like:
MyProject
├── build
│   └── test
│       ├── resources
│       │   ├── input1.txt
│       │   └── input2.txt
│       └── unit_test
To achieve this, my current wscript is:
def options(opt):
    opt.load('compiler_cxx')

def configure(conf):
    conf.load('compiler_cxx')

def build(bld):
    bld.stlib(source='src/do_something.cpp',
              target='mylib',
              includes='src')
    bld.exec_command("cp -r test/resources build/test")
    bld.program(source='test/unit_test.cpp',
                includes='src',
                target='test/unit_test',
                use='mylib')
Using the bld.exec_command like this seems hacky. What is a better way? How are other people organizing their test resources with waf?
I am using waf 1.9.5.
The easiest way is of course to change your program so that it can read resources from an arbitrary location. Why litter the file system with multiple copies of the same files?
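For instance, here is a hypothetical sketch of that approach: let the test resolve its resource directory from an environment variable (TEST_RESOURCE_DIR is an assumed name, set by whatever runs the tests) instead of relying on copies inside the build directory:
// Hypothetical helper for unit_test.cpp.
#include <cstdlib>
#include <string>

static std::string resource_path(const std::string& name) {
    // TEST_RESOURCE_DIR is an assumed environment variable, set by the
    // test runner (wscript, CI script, or shell) to the resources folder.
    const char* dir = std::getenv("TEST_RESOURCE_DIR");
    return std::string(dir ? dir : "test/resources") + "/" + name;
}

// Usage inside a test: std::ifstream in(resource_path("input1.txt"));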
That said, you can easily copy a directory tree recursively using:
# Recreate each resource file at the same relative path in the build tree.
for f in ctx.path.ant_glob('test/resources/**/*'):
    ctx(features='subst', source=f.srcpath(),
        target=f.srcpath())

Use cases for new Django 1.4 project structure?

I guess this is sort of a follow-up question to "Where should i create django apps in django 1.4?". The final answer there seemed to be "nobody knows why Django changed the project structure", which seems a bit unsatisfactory.
We are starting up a new Django project, and currently we are following the basic structure outlined at http://www.deploydjango.com/django_project_structure/index.html:
├── project
│   ├── apps
│   │   ├── app1
│   │   └── app2
│   ├── libs
│   │   ├── lib1
│   │   └── lib2
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
But we are also anticipating a multi-developer environment comprising largely independent applications with common project-level components, so it seems cleaner to me to separate the project and app paths.
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── apps
│   ├── app1
│   └── app2
├── libs
│   ├── lib1
│   └── lib2
└── manage.py
It's hard to come up with any specific, non-stylistic rationale for this, though. (I've mostly worked only with single-app projects before now, so I may be missing something here.)
Mainly, I'm motivated by the fact that Django 1.4 seems to be moving in the latter direction. I assume there is some rationale or expected use case that motivated this change, but I've only seen speculation on what it might be.
Questions:
What was the motivation for the 1.4 project structure change?
Are there use cases where having apps inside/outside of the project makes a non-trivial impact?
It's much easier to extract an app from a project, because there are no more imports like this:
from projectname.appname.models import MyModel
Instead, you import them the same way you would import apps that are installed via a Python package.
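For example, with the app at the top level, the same import simply becomes:
from appname.models import MyModel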
If you use i18n, this could make an impact, because makemessages searches for translation strings in the current directory. A good way to translate the apps and the project using a single .po file is to create the locale folder outside the project dir:
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── app1
├── app2
├── locale
│   ├── en
│   └── de
└── manage.py
I marked the earlier response as the answer, but I ran into this blog post in the IRC archives that seems to have some additional info.
http://blog.tinbrain.net/blog/2012/mar/15/django-vs-pythonpath/
As I understand it, the gist is:
When you're developing, manage.py implicitly sets up PYTHONPATH to see the project-level code, with the result that import myapp works for an app defined inside the project.
When you deploy, you generally don't run manage.py, so you would have to write import myproject.myapp; thus things break on deployment if you don't know about this.
The "standard" fix is to add the project to PYTHONPATH, but this results in double imports (myapp and myproject.myapp), which can generate weird behavior in things like signals.
So the 1.4 project structure seems mainly intended to eliminate the possibility of devs relying on an odd effect of manage.py.
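A minimal sketch of the double-import pitfall (paths and names are hypothetical):
import sys

# Both the project directory and its parent end up on the path.
sys.path.append('/srv/myproject')
sys.path.append('/srv')

import myapp.models            # cached as 'myapp.models'
import myproject.myapp.models  # cached again as 'myproject.myapp.models'

# Python caches modules by name, so these are two distinct module objects;
# module-level side effects such as signal registration run twice.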