Denial of Service attacks against remote hosts on the internet - amazon-web-services

I just got the below mail from Amazon. The instance runs Ubuntu and has LDAP and Apache2 installed.
The LDAP server is used by only one other instance, to authenticate its users (just Ubuntu system users); nothing else uses LDAP authentication.
Apache2 only serves phpLDAPadmin and is down most of the time (I start it when I need to make changes to LDAP).
I have checked syslog and auth.log and cannot find any successful login attempt except my own (same user, key and IP).
The report was sent while we were running a stress test of about 1,000 req/sec against a web app hosted on Tomcat 6 on the other machine (the one that uses the LDAP server for authentication). The type of request used in the stress test doesn't require any authentication; it only loads data from the DB and returns a JSON array.
Only SSH, LDAP and HTTP are open on the LDAP server machine (the one with the issue).
My questions: How can I find out the cause of the outbound traffic? Could the stress test cause this, or is it just a coincidence?
Dear Amazon EC2 Customer,
We've received a report that your instance(s):
Instance Id: xxx
has been making Denial of Service attacks against remote hosts on the Internet; check the information provided below by the abuse reporter.
This is specifically forbidden in our User Agreement: http://aws.amazon.com/agreement/
Please immediately restrict the flow of traffic from your instance(s) to cease disruption to other networks, and reply to this email to send your response to the original abuse reporter. This will activate a flag in our ticketing system, letting us know that you have acknowledged receipt of this email.
It's possible that your environment has been compromised by an external attacker. It remains your responsibility to ensure that your instances and all applications are secured. The link http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1233
provides some suggestions for securing your instances.
Case number: 000000-0
Additional abuse report information provided by original abuse reporter:
Destination IPs:
Destination Ports:
Destination URLs:
Abuse Time: Fri Jan 01 05:27:00 UTC 2016
Log Extract:
<<<
It has come to our attention that Denial of Service (DoS) attacks were launched from your instance to IP(s) 162.159.9.138 via TCP port(s) 53. Please investigate your instance(s) and reply detailing the corrective measures you will be taking to address this activity.
In the meantime, we have restricted network access to only inbound TCP ports 22 and 3389 on the instance(s) to prevent further abuse.
If you believe that you were compromised by an external attacker, the best recourse is to back up your data, migrate your applications to a new instance, and terminate the old one. Attempting to repair a compromised instance does not guarantee a successful cleanup in most cases. We recommend reviewing the following resources to ensure your EC2 environment is properly secured:
Amazon EC2 Security Groups User Guide:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
AWS Security Resources:
http://aws.amazon.com/security/security-resources/
AWS Security Best Practices:
https://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
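Coming back to the question of tracing the outbound traffic, a hedged sketch of the usual first diagnostic steps (the interface name eth0 is an assumption) is to capture the reported flow and map it back to a process:

    # Watch for outbound packets to the target named in the abuse report
    sudo tcpdump -ni eth0 'dst host 162.159.9.138 and dst port 53'
    # Map any live socket to that destination back to the owning process
    sudo ss -tnp | grep 162.159.9.138
    # List every process holding a network socket, looking for anything unexpected
    sudo lsof -i -nP

A stress test that only queries the DB and returns JSON has no reason to open connections to a remote DNS server on port 53, so if a process shows up here, that points to a compromise rather than a coincidence.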

Related

encrypted links from google cloudrun svc to cloudrun svc

Backstory (can possibly be skipped): The other day I finished connecting to MySQL over full SSL from a Cloud Run service without really doing any SSL cert work, which was great! Just click 'only allow SSL' in GCP, click 'generate server certs', allow my Cloud Run service access to the database instance, swap out the TCP socket factory for Google's factory, set some properties, and it worked.
PROBLEM:
Now I am trying to figure out secure Cloud Run service-to-service communication, and I am reading
https://cloud.google.com/run/docs/authenticating/service-to-service
which has us requesting a token over HTTP. Why is this not over HTTPS? Is the communication from my Docker container to the token service actually encrypted?
Can I communicate over HTTP between two Cloud Run services and have it be encrypted?
thanks,
Dean
From https://cloud.google.com/compute/docs/storing-retrieving-metadata#is_metadata_information_secure:
When you make a request to get information from the metadata server, your request and the subsequent metadata response never leave the physical host that is running the virtual machine instance.
The traffic from your container to the metadata server at http://metadata/ stays entirely within your project, and thus SSL is not required; there is no opportunity for it to be intercepted.
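To make the flow concrete, here is a hedged sketch of the call the linked doc describes; the receiving service URL is a placeholder. The token request is plain HTTP but never leaves the physical host, and the actual service-to-service call travels over HTTPS:

    # Fetch an identity token from the metadata server; this hop never leaves the host
    TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://receiving-service-xyz.a.run.app")
    # The real service-to-service request then goes over HTTPS with the token attached
    curl -s -H "Authorization: Bearer ${TOKEN}" https://receiving-service-xyz.a.run.app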

How can I configure Google Cloud Platform to allow Cloudflare only?

I recently started using GCP, but there is one thing I can't solve.
I have 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only Cloudflare and LB IPs, the website crashes and refuses connections).
Recently I was under attack. I activated Cloudflare's DDoS mode and restarted everything, but in about 6 hours the attack came back even with Cloudflare active. I saw MySQL connections jump from 20-30 to 254, and all connections came from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it.
If I activate firewall rules to allow traffic only from the LB and Cloudflare, the site refuses all connections.
Any idea what I can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or what software caused the issue.
Generally speaking, you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that should help:
Keep your firewall rules sensible; e.g., it is not good practice to have a rule allowing all ingress connections on port 22 from all source IPs, for obvious reasons (see the example after this list).
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people that aren't currently working for you or your company still have access to them.
If you're using certificates revoke them, generate new ones and share them in a secure way and with the minimum required number of users.
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
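As an illustration of the firewall hint above, here is a hedged sketch. The rule names and target tag are placeholders; the Cloudflare range shown is just one of the ranges published at https://www.cloudflare.com/ips/; 130.211.0.0/22 and 35.191.0.0/16 are the ranges GCP documents for its HTTP(S) load balancer proxies and health checks, and leaving them out is a plausible cause of the refused connections described in the question:

    # Allow HTTPS only from a published Cloudflare range (one example; add the rest)
    gcloud compute firewall-rules create allow-cloudflare-https \
        --direction=INGRESS --action=ALLOW --rules=tcp:443 \
        --source-ranges=173.245.48.0/20 --target-tags=web
    # Also allow the load balancer's documented proxy and health-check ranges
    gcloud compute firewall-rules create allow-lb-proxies \
        --direction=INGRESS --action=ALLOW --rules=tcp:80,tcp:443 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=web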
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor project usage closely via the monitoring API to identify abnormal project usage.
To diagnose trouble with GCE instances, serial port output from the instance can be useful. You can check the serial port output by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when instances are shut down and rebooted, and the log is not visible while the instance is stopped.
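The same output is available from the command line, for example (the instance name and zone are placeholders):

    # Dump the serial console output for an instance
    gcloud compute instances get-serial-port-output my-instance --zone us-central1-a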
Stackdriver monitoring is also helpful to provide an audit trail for diagnosing problems.
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) that can be set up to trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
Here are some further hints on keeping GCP projects secure.

In AWS, access control by ssh proxy + sshd

In AWS, our users (system admins) can access internal-zone DB servers using SSH tunneling, without any local firewall restrictions.
As you know, to access an internal node a user must first go through a public-zone gateway server.
Because the gateway is effectively a single point of passage, I want to control the traffic from tunneled users on the gateway server.
For example, I want to see the currently connected IP addresses of all clients, identify the internal destination (e.g., the DB server IP) each user accessed, and furthermore control connections from unauthorized users.
To make this dream come true, I think the idea below would be ideal:
1) Change the sshd port to something other than 22. Restart the sshd daemon.
2) Place an SSH proxy (nginx, HAProxy, or something else) in front of sshd and have the proxy receive all SSH traffic from clients.
3) The SSH proxy routes the traffic to sshd.
4) Then I can see all user activity by analyzing the SSH proxy log. That's it.
Is this dream possible?
Clever, but with a critical flaw: you won't gain any new information.
Why? The first S in SSH: "secure."
The "ssh proxy" you envision would be unable to tell you anything about what's going on inside the SSH connections, which is where the tunnels are negotiated. The connections are encrypted, of course, and a significant point of SSH is that it can't be sniffed. The fact that the ssh proxy is on the same machine makes no difference. If it could be sniffed, it wouldn't be secure.
All your SSH proxy could tell you is that an inbound connection was made from a client computer, and syslog already tells you that.
In a real sense, it would not be an "ssh proxy" at all -- it would only be a naïve TCP connection proxy on the inbound connection.
So you wouldn't be able to learn any new information with this approach.
It sounds like what you need is for your ssh daemon, presumably openssh, to log the tunnel connections established by the connecting users.
This blog post (which you will, ironically, need to bypass an invalid SSL certificate in order to view) was mentioned at Server Fault and shows what appears to be a simple modification to the openssh source code to log the information you want: who set up a tunnel, and to where.
Or, enable some debug-level logging on sshd.
So, to me, it seems like the extra TCP proxy is superfluous -- you just need the process doing the actual tunnels (sshd) to log what it is doing or being requested to do.
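As a hedged sketch of that logging route (the paths assume a typical Linux OpenSSH install, and the exact log lines vary between OpenSSH versions):

    # In /etc/ssh/sshd_config, raise the log level so forwarding requests are recorded:
    #   LogLevel DEBUG1
    # Restart sshd, then search the auth log for tunnel ("direct-tcpip") requests:
    sudo systemctl restart sshd
    sudo grep -iE 'direct[-_]tcpip' /var/log/auth.log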

setup postfix SMTP on amazon ec2

I'm trying to set up a Postfix SMTP mail server on my Amazon EC2 instance. I followed this guide http://cybart.com/how-to-install-and-configure-postfix-on-amazon-ec2/ and many other ones on configuring main.cf.
Every time I try to telnet to my mail server (mail.domain.com, SMTP), it tries to connect to address XXX.XXX.XXX.XX, but then the operation times out and I'm unable to connect to the remote host.
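A timeout at that stage usually means the packets are being dropped before Postfix ever sees them, rather than Postfix rejecting the connection. A hedged sketch of how to tell the two apart:

    # From an outside machine: does anything answer on port 25 at all?
    nc -vz mail.domain.com 25
    # On the EC2 instance itself: is Postfix actually listening?
    sudo ss -tlnp | grep ':25'
    # If it listens locally but times out remotely, the EC2 security group
    # (or a local firewall) is almost certainly dropping inbound port 25.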
If you're still looking for guidance on how to set up an email server using Amazon EC2, I've written a guide for it. Even though some find using EC2 for email to be a hassle, it doesn't have to be for you.
https://avix.co/blog/server-hosting
Here are some details about what the configuration will give you:
The system uses:
-Postfix as the SMTP agent
-Dovecot as the client-side connection handler and mailbox manager
-A PostgreSQL database to handle mail users, mail transports and the SpamAssassin database
-Amavis (with ClamAV & SpamAssassin) for protection against viruses sent through email, and to facilitate an adaptive spam detection system that learns and corrects its behavior for each individual user
-SpamAssassin as the spam filter, using Bayes to learn spam from ham and ham from spam
-Apache as the web server, enabling HTTP & HTTPS connections to your site
-SquirrelMail as the default webmail. After the server is set up you will be able to check your email at yourdomain.com/mail from any browser on any device
The system supports:
-Multiple transports for different domains
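For orientation, a minimal main.cf identity sketch of the kind such a setup starts from (the hostname and domain are placeholders, and the full stack above needs considerably more configuration than this):

    # /etc/postfix/main.cf, minimal identity settings (values are placeholders)
    myhostname = mail.example.com
    mydomain = example.com
    myorigin = $mydomain
    inet_interfaces = all
    mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain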
Good luck, and let me know if I can clarify anything.

Unable to connect to AppFabric Cache Server

I have set up an AppFabric (v1.1) cache server. The service is running under a service account, and the cluster configuration is stored in SQL Server. The service account has rights on the SQL Server and is able to configure it successfully.
The admin console, when opened as the service account user, is able to access the cache.
But the problem is that when I try to connect to this caching service from a different machine, it is unable to connect:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later
When I tried with the XML configuration on a file share and the service running under the "NetworkService" account, I was able to connect.
The following settings have been verified on the caching server:
The service is up and running on port 22233.
The firewall is turned off.
The client machine has been granted permission to access the cache cluster.
Running AppFabric cache as anything other than a “Network Service” is not supported.
Here’s the official documentation that hints at the limitation:
The Caching Service is installed to run under the Network Service account. This means that for operations over the network, the Caching Service uses the security credentials of the cache server's domain computer account. The Caching Service uses the lower-privileged Network Service account to help mitigate the damage that could be caused by malicious attacks.
But if you don't find that convincing, there's this forum post from an MS person:
Velocity service running as Domain User is NOT supported.
If you think this is a horrible limitation… I agree with you.
AppFabric cache is a 100% WCF implementation. When I ran into this problem, I turned on WCF tracing and found the exception "The target principal name is incorrect". AppFabric cache does not expose the ability to configure the principal.
In my testing with the cache running under a domain account, I found that if I called the cache across a domain boundary: It worked. If I called it from within the same domain it failed. My infrastructure guy said that the behavior made sense to him based on how credentials were presented in the different scenarios.
For anyone else hitting this, check out:
http://blogs.msdn.com/b/appfabriccat/archive/2010/11/03/appfabric-cache-cache-servers-and-cache-clients-on-different-domains.aspx
It caused me such a headache.
Basically, I had to update my hosts file with the IP address and the actual server name of my AppFabric server,
and this resolved the error I was getting.
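For anyone trying the same fix, the hosts entry looks like this (the IP address and server name are placeholders):

    # In C:\Windows\System32\drivers\etc\hosts on the client machine:
    10.0.0.15    APPFABRICSERVER01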