I can't find the line in httpd.conf where I should write "Allow from all" to access WAMP from my iPhone.
I've tried writing "Allow from all" just about everywhere, but nothing works. I need help because I can't find anything concerning WampServer 3, and I'm losing a lot of time.
Thank you guys!
The solution on WampServer 3 is indeed different: you have to add "Require ip 192.168.0.0/24" on line 285 of httpd.conf, just under "Require local":
284: # onlineoffline tag - don't remove
285: Require local
286: Require ip 192.168.0.0/24
287: </Directory>
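After saving the change, restart Apache so it takes effect. You can do that from the WampServer tray icon ("Restart All Services") or from an elevated command prompt; a minimal sketch, assuming the default 64-bit service name (check yours, it may be wampapache instead):
net stop wampapache64
net start wampapache64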
I manage a WordPress site, but the editors and contributors get a lot of false-positive flags from ModSecurity, which affects their publishing experience, so I intend to just whitelist the country they publish from.
My problem is that I am not that good with ModSecurity rule configuration, so I am not sure whether what I have prepared is correct.
Secondly, how do I get the legacy version of GeoLiteCity.dat?
Below is what I have put together:
# Allow NG Country
SecGeoLookupDb /usr/local/geo/data/GeoLiteCity.dat
SecRule REMOTE_ADDR "@geoLookup" "chain,phase:2,id:1100234,allow,msg:'Custom WAF Rules: Allow NG Country'"
SecRule GEO:COUNTRY_CODE "@streq NG"
With this rule you allow clients from the country with the NG country code, but you never deny the other countries.
You should try this:
SecGeoLookupDb /usr/local/geo/data/GeoLiteCity.dat
SecRule REMOTE_ADDR "@geoLookup" "id:1100234,phase:1,drop,msg:'Custom WAF Rules: Allow NG Country',chain"
SecRule GEO:COUNTRY_CODE "!@streq NG"
As you can see, the main difference between this and your rule is that the disruptive action is not allow, but drop: every request whose country code is not NG gets dropped.
Please note that you can use this rule in phase:1.
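To sanity-check the database itself before blaming the rule, you can query it directly from the shell; a minimal sketch, assuming the legacy geoiplookup tool (from the GeoIP package) is installed and using your database path:
geoiplookup -f /usr/local/geo/data/GeoLiteCity.dat 8.8.8.8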
Secondly, how do I get the legacy version of GeoLiteCity.dat?
I'm not familiar with the licensing, but this is what I found:
https://mailfud.org/geoip-legacy/
Edit: you can find more information about the GEO variable here.
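If you want to verify which country code ModSecurity resolves for each client before enforcing the drop, a hedged sketch of a log-only rule (the id 1100235 is arbitrary, pick any free one):
SecRule REMOTE_ADDR "@geoLookup" "id:1100235,phase:1,pass,log,msg:'GeoIP country: %{GEO.COUNTRY_CODE}'"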
I'm trying to overwrite a rule as per the documentation:
https://documentation.wazuh.com/3.12/learning-wazuh/replace-stock-rule.html
So I've copied one rule to local_rules.xml and created my own group (prior to that, I also tried to put it within the rule's original group tag), but it seems to be completely ignoring it.
This is what I've put in local_rules.xml:
<group name="istvan">
<rule frequency="8" id="31533" level="9" overwrite="yes" timeframe="20">
<if_matched_sid>31530</if_matched_sid>
<same_source_ip/>
<description>High amount of POST requests in a small period of time (likely bot).</description>
<group>pci_dss_6.5,pci_dss_11.4,gdpr_IV_35.7.d,nist_800_53_SA.11,nist_800_53_SI.4,</group>
</rule>
</group>
I've only changed the level to 9 and added the overwrite="yes" attribute. The idea is that it doesn't send me these alerts (as my threshold is set to level 10+). Save, restart, but it's completely ignoring it, and I'm still getting those alerts with the level 10 tag.
Frankly, I'm starting to become clueless as to why this is happening.
Any ideas?
Thanks.
A good way to test the expected behaviour would be to use /var/ossec/bin/ossec-logtest, as mentioned in that doc.
To elaborate, I will take the example from that doc: I will overwrite rule 5716 (https://github.com/wazuh/wazuh-ruleset/blob/317052199f751e5ea936730710b71b27fdfe2914/rules/0095-sshd_rules.xml#L121), as below:
[root@localhost vagrant]# egrep -iE "ssh" /var/ossec/etc/rules/local_rules.xml -B 4 -A 3
<rule id="5716" overwrite="yes" level="9">
<if_sid>5700</if_sid>
<match>^Failed|^error: PAM: Authentication</match>
<description>sshd: authentication failed.</description>
<group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,</group>
</rule>
The logs can be tested without having to restart the Wazuh manager: open /var/ossec/bin/ossec-logtest, then paste the log:
2020/05/26 09:03:00 ossec-testrule: INFO: Started (pid: 9849).
ossec-testrule: Type one log per line.
Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2
**Phase 1: Completed pre-decoding.
full event: 'Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2'
timestamp: 'Oct 23 17:27:17'
hostname: 'agent'
program_name: 'sshd'
log: 'Failed password for root from ::1 port 60164 ssh2'
**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '::1'
srcport: '60164'
**Phase 3: Completed filtering (rules).
Rule id: '5716'
Level: '9'
Description: 'sshd: authentication failed.'
As expected, the level has been overwritten (it was initially 5). Although in your case, you will have to paste the log 8 times within a timeframe lower than 20 s to be able to trigger that rule.
If you can share the logs triggering that alert, I can test with them.
On the other hand, you can create a sibling rule to simply ignore your rule 31533, something similar to the below:
<rule id="100010" level="2">
<if_sid>31533</if_sid>
<description>Ignore rule 31533</description>
</rule>
Make sure to restart the Wazuh manager afterward to apply the change.
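On a systemd-based install that would be, for example:
systemctl restart wazuh-manager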
You can find more information about customizing rules/decoders here: https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
Hope this helps,
After finally talking to the developers, it turns out that it was indeed ignoring local_rules.xml. I had a strange exclusion of one rule (probably problematic syntax, although it didn't report an error):
"rule_exclude": [
    "31151"
]
When I removed it, it started working as described in the user's guide.
I've just had a bit of fun trying to connect to a new VM I'd created. I found loads of posts from people with the same problem; this answer details the points I've found.
(1) For me it worked with
<VMName>\Username
Password
e.g.
Windows8VM\MyUserName
SomePassword#1
(2) Some people have just needed to use a leading '\', i.e.
\Username
Password
(see "Your credentials did not work Azure VM")
(3) You can now reset the username/password from the app portal. There are PowerShell scripts which will also allow you to do this, but that shouldn't be necessary anymore.
(4) You can also try redeploying the VM; you can do this from the app portal.
(5) This blog says that "Password cannot contain the username or part of username", but that must be out of date, as I tried that once I got it working and it worked fine:
https://blogs.msdn.microsoft.com/narahari/2011/08/29/your-credentials-did-not-work-error-when-connecting-to-windows-azure-vms/
(6) You may find links such as the one below which mention Get-AzureVM; that seems to be for classic VMs. There seem to be equivalents for the Resource Manager VMs, such as Get-AzureRMVM:
https://blogs.msdn.microsoft.com/mast/2014/03/06/enable-rdp-or-reset-password-with-the-vm-agent/
For complete novices to PowerShell, if you do want to go down that road, here are the basics you may need. In the end I don't believe I needed this, just point (1):
Uninstall-Module AzureRM
Install-Module AzureRM -AllowClobber
Import-Module AzureRM
Login-AzureRmAccount  # this will open a window which takes you through the usual logon process
Add-AzureAccount  # not sure why you need both, but I couldn't log on without this
Select-AzureSubscription -SubscriptionId <the guid for your subscription>
Set-AzureRmVMAccessExtension -ResourceGroupName "<your RG name>" -VMName "Windows8VM" -Name "myVMAccess" -Location "northeurope" -username <username> -password <password>
(7) You can connect to a VM in a scale set because by default the Load Balancer has NAT rules mapping from port 50000 onwards, i.e. just remote desktop to the IP address:port. You can also do it from a VM that isn't in the scale set: go to the scale set's overview, click on the "virtual network/subnet" to get the internal IP address, then remote desktop from the other VM.
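For example, from a Windows machine (the IP and NAT port below are placeholders for whatever your load balancer exposes):
mstsc /v:52.1.2.3:50000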
I ran into similar issues. It seems to need a domain by default. Here is what worked for me:
localhost\username
Another option can be vmname\username
Some more guides to help:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal#connect-to-virtual-machine
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/connect-logon
In April 2022, "Password cannot contain the username or part of username" was the issue.
During the creation of the VM in Azure everything was alright, but I wasn't able to connect via RDP.
Same in Nov 2022: you will be allowed to create a password that contains the user name, but during login it will display the credential error. Removing the user name from the password fixed it.
I set up Countly analytics on the free tier AWS EC2, but stupidly did not set up an Elastic IP with it. Now the traffic is so great that I can't even log into the analytics, as the CPU is constantly running at 100%.
I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in future.
In the meantime, is it possible for me to set up a second instance and forward all the traffic from the current one to the new one?
I found this: http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/ Will this work from one EC2 instance to another?
Thanks
EDIT ---
Countly log
/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
throw err;
^ ReferenceError: liveApi is not defined
at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
at null.<anonymous> (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
at g (events.js:175:14)
at EventEmitter.emit (events.js:106:17)
at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20
You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? In case it is mongod, issue the mongotop command to see the most time-consuming collection accesses. We can go from there.
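For example, to print the busiest collections every 5 seconds (assuming the MongoDB tools are installed on the instance):
mongotop 5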
Yes, it is possible. I use nginx with a Node.js app. I wanted to redirect traffic from one instance to another; the instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.
Step 1: Go to /etc/nginx/sites-enabled and open the default.conf file. Your configuration might be in a different file.
Step 2: Change proxy_pass to your chosen IP/domain/sub-domain:
server {
    listen 80;
    server_name your_domain.com;
    location / {
        ...
        proxy_pass your_ip; # you can put a domain or sub-domain with protocol (http/https)
    }
}
Step 3: Then restart nginx:
sudo systemctl restart nginx
This works for any external instance as well as instances in a different VPC.
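To check the proxying before pointing real traffic at it, something like this from the instance itself (the Host header is a placeholder for your server_name):
curl -I -H "Host: your_domain.com" http://127.0.0.1/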
I'm trying to put a set of EC2 instances behind a couple of Varnish servers. Our Varnish configuration very seldom changes (once or twice a year) but we are always adding/removing/replacing web backends for all kinds of reasons (updates, problems, load spikes). This creates problems because we always have to update our Varnish configuration, which has led to mistakes and heartbreak.
What I would like to do is manage the set of backend servers simply by adding or removing them from an Elastic Load Balancer. I've tried specifying the ELB endpoint as a backend, but I get this error:
Message from VCC-compiler:
Backend host "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
123.123.123.1
63.123.23.2
31.13.67.3
('input' Line 2 Pos 17)
.host = "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
The only consistent public interface ELB provides is its DNS name. The set of IP addresses this DNS name resolves to changes over time and with load.
In this case I would rather NOT specify one exact address - I would like to round-robin between whatever comes back from the DNS. Is this possible? Or could someone suggest another solution that would accomplish the same thing?
Thanks,
Sam
You could use an NGINX web server to deal with the CNAME resolution problem:
User -> Varnish (cache section) -> NGINX -> ELB -> EC2 instances (application section)
You have a configuration example in this post: http://blog.domenech.org/2013/09/using-varnish-proxy-cache-with-amazon-web-services-elastic-load-balancer-elb.html
Juan
I wouldn't recommend putting an ELB behind Varnish.
The problem lies in the fact that Varnish resolves the name assigned to the ELB and caches the IP addresses until the VCL gets reloaded. Because of the dynamic nature of the ELB, the IPs linked to the CNAME can change at any time, resulting in Varnish routing traffic to an IP which is no longer linked to the correct ELB.
This is an interesting article you might like to read.
Yes, you can.
In your default.vcl, put:
include "/etc/varnish/backends.vcl";
and set the backend to:
set req.backend = default_director;
Then run this script to create backends.vcl:
#!/bin/bash
FILE_CURRENT_IPS='/tmp/elb_current_ips'
FILE_OLD_IPS='/tmp/elb_old_ips'
TMP_BACKEND_CONFIG='/tmp/tmp_backends.vcl'
BACKEND_CONFIG='/etc/varnish/backends.vcl'
ELB='XXXXXXXXXXXXXX.us-east-1.elb.amazonaws.com'
IPS=($(dig +short $ELB | sort))
if [ ! -f $FILE_OLD_IPS ]; then
touch $FILE_OLD_IPS
fi
echo ${IPS[@]} > $FILE_CURRENT_IPS
DIFF=`diff $FILE_CURRENT_IPS $FILE_OLD_IPS | wc -l`
cat /dev/null > $TMP_BACKEND_CONFIG
if [ $DIFF -gt 0 ]; then
COUNT=0
for i in ${IPS[@]}; do
let COUNT++
IP=$i
cat <<EOF >> $TMP_BACKEND_CONFIG
backend app_$COUNT {
.host = "$IP";
.port = "80";
.connect_timeout = 10s;
.first_byte_timeout = 35s;
.between_bytes_timeout = 5s;
}
EOF
done
COUNT=0
echo 'director default_director round-robin {' >> $TMP_BACKEND_CONFIG
for i in ${IPS[@]}; do
let COUNT++
cat <<EOF >> $TMP_BACKEND_CONFIG
{ .backend = app_$COUNT; }
EOF
done
echo '}' >> $TMP_BACKEND_CONFIG
echo 'NEW BACKENDS'
mv -f $TMP_BACKEND_CONFIG $BACKEND_CONFIG
fi
mv $FILE_CURRENT_IPS $FILE_OLD_IPS
I wrote this script to have a way to auto-update the VCL once a new instance comes up or goes down.
It requires that the .vcl has an include of backends.vcl.
This script is just a part of the solution; the tasks should be:
1. get the new server name and IP (autoscaling); you can use the AWS API commands to do that, also via bash
2. update the VCL (this script)
3. reload Varnish (see the sketch below)
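For step 3, a zero-downtime reload could look roughly like this (a sketch; the label for the new VCL is arbitrary):
NEW_VCL="reload_$(date +%s)"
varnishadm vcl.load $NEW_VCL /etc/varnish/default.vcl
varnishadm vcl.use $NEW_VCL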
The script is here:
http://felipeferreira.net/?p=1358
Other people did it in different ways:
http://blog.cloudreach.co.uk/2013/01/varnish-and-autoscaling-love-story.html
You wouldn't get to 10K requests if Varnish had to resolve an IP on each one. Varnish resolves IPs at startup and does not refresh them unless it is restarted or reloaded. Indeed, Varnish refuses to start if it finds two IPs for a DNS name in a backend definition, like the IPs returned for multi-AZ ELBs.
So we solved a similar issue by placing Varnish in front of nginx. nginx can define an ELB as a backend, so Varnish's backend is a local nginx and nginx's backend is the ELB.
But I don't feel comfy with this solution.
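For reference, a minimal sketch of that nginx hop (the ELB name and resolver IP are placeholders). Note that a plain proxy_pass hostname is also resolved only once at config load; putting the DNS name in a variable together with a resolver directive forces nginx to re-resolve it:
server {
    listen 8080;
    resolver 10.0.0.2 valid=30s;  # your VPC DNS resolver
    location / {
        set $elb "http://XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
        proxy_pass $elb;  # using a variable forces per-request DNS resolution
    }
}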
You could make the ELB internal in your VPC so that it has a local IP. This way you don't have to use DNS CNAMEs or anything else that Varnish does not support as easily.
Using an internal ELB does not help the problem, because it usually has two internal IPs!
Backend host "internal-XXX.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
10.30.10.134
10.30.10.46
('input' Line 13 Pos 12)
What I am not sure about is whether these IPs will always remain the same or whether they can change. Anyone?
In my previous answer (more than three years ago) I hadn't solved this issue; my [nginx -> varnish -> nginx] -> ELB solution worked until the ELB changed IPs.
But for some time now we have been using the same setup, with nginx compiled with the jdomain plugin.
So the idea is to place an nginx on the same host as Varnish and configure the upstream there like this:
resolver 10.0.0.2; ## IP for the aws resolver on the subnet
upstream backend {
jdomain internal-elb-dns-name port=80;
}
That upstream will automatically update its IPs if the ELB changes its addresses.
It might not be a solution using Varnish alone, but it works as expected.